Dec 08 19:29:03 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 08 19:29:03 crc kubenswrapper[5125]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:03 crc kubenswrapper[5125]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 08 19:29:03 crc kubenswrapper[5125]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:03 crc kubenswrapper[5125]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:03 crc kubenswrapper[5125]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 08 19:29:03 crc kubenswrapper[5125]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.533313    5125 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536178    5125 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536202    5125 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536208    5125 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536215    5125 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536220    5125 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536226    5125 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536232    5125 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536237    5125 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536243    5125 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536249    5125 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536254    5125 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536259    5125 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536264    5125 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536269    5125 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536274    5125 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536290    5125 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536295    5125 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536299    5125 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536306    5125 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536311    5125 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536316    5125 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536320    5125 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536325    5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536330    5125 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536335    5125 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536340    5125 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536345    5125 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536349    5125 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536354    5125 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536359    5125 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536363    5125 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536368    5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536373    5125 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536377    5125 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536383    5125 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536387    5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536392    5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536398    5125 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536402    5125 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536407    5125 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536413    5125 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536418    5125 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536422    5125 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536427    5125 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536432    5125 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536437    5125 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536442    5125 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536447    5125 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536452    5125 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536457    5125 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536462    5125 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536467    5125 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536472    5125 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536476    5125 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536481    5125 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536488    5125 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536494    5125 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536499    5125 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536505    5125 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536510    5125 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536516    5125 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536526    5125 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536538    5125 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536544    5125 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536550    5125 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536556    5125 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536563    5125 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536569    5125 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536575    5125 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536655    5125 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536662    5125 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536669    5125 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536674    5125 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536681    5125 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536686    5125 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536691    5125 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536696    5125 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536701    5125 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536708    5125 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536714    5125 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536719    5125 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536725    5125 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536729    5125 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536734    5125 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536739    5125 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.536743    5125 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537286    5125 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537295    5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537300    5125 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537304    5125 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537311    5125 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537315    5125 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537320    5125 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537325    5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537329    5125 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537334    5125 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537339    5125 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537344    5125 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537348    5125 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537353    5125 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537357    5125 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537365    5125 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537370    5125 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537374    5125 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537379    5125 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537385    5125 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537390    5125 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537394    5125 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537399    5125 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537404    5125 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537408    5125 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537413    5125 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537417    5125 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537422    5125 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537427    5125 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537431    5125 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537436    5125 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537441    5125 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537446    5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537450    5125 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537455    5125 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537459    5125 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537465    5125 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537470    5125 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537475    5125 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537480    5125 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537484    5125 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537489    5125 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537494    5125 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537498    5125 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537503    5125 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537508    5125 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537512    5125 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537519    5125 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537526    5125 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537532    5125 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537538    5125 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537543    5125 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537549    5125 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537554    5125 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537559    5125 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537563    5125 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537568    5125 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537573    5125 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537578    5125 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537582    5125 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537588    5125 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537593    5125 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537598    5125 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537603    5125 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537626    5125 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537631    5125 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537636    5125 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537640    5125 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537645    5125 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537650    5125 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537654    5125 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537659    5125 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537663    5125 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537668    5125 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537673    5125 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537678    5125 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537683    5125 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537687    5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537692    5125 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537700    5125 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537706    5125 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537712    5125 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537717    5125 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537722    5125 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537726    5125 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.537732    5125 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538304    5125 flags.go:64] FLAG: --address="0.0.0.0"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538319    5125 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538329    5125 flags.go:64] FLAG: --anonymous-auth="true"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538337    5125 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538344    5125 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538350    5125 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538357    5125 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538365    5125 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538371    5125 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538376    5125 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538382    5125 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538388    5125 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538393    5125 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538398    5125 flags.go:64] FLAG: --cgroup-root=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538404    5125 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538409    5125 flags.go:64] FLAG: --client-ca-file=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538415    5125 flags.go:64] FLAG: --cloud-config=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538420    5125 flags.go:64] FLAG: --cloud-provider=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538425    5125 flags.go:64] FLAG: --cluster-dns="[]"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538433    5125 flags.go:64] FLAG: --cluster-domain=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538438    5125 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538444    5125 flags.go:64] FLAG: --config-dir=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538449    5125 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538455    5125 flags.go:64] FLAG: --container-log-max-files="5"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538462    5125 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538470    5125 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538476    5125 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538481    5125 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538487    5125 flags.go:64] FLAG: --contention-profiling="false"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538492    5125 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538497    5125 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538503    5125 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538508    5125 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538737    5125 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538744    5125 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538755    5125 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538767    5125 flags.go:64] FLAG: --enable-load-reader="false"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538774    5125 flags.go:64] FLAG: --enable-server="true"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538781    5125 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538791    5125 flags.go:64] FLAG: --event-burst="100"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538798    5125 flags.go:64] FLAG: --event-qps="50"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538806    5125 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538813    5125 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538821    5125 flags.go:64] FLAG: --eviction-hard=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538831    5125 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538838    5125 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538846    5125 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538853    5125 flags.go:64] FLAG: --eviction-soft=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538860    5125 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538866    5125 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538873    5125 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538880    5125 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538886    5125 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538891    5125 flags.go:64] FLAG: --fail-swap-on="true"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538896    5125 flags.go:64] FLAG: --feature-gates=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538903    5125 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538908    5125 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538920    5125 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538926    5125 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538931    5125 flags.go:64] FLAG: --healthz-port="10248"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538937    5125 flags.go:64] FLAG: --help="false"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538942    5125 flags.go:64] FLAG: --hostname-override=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538948    5125 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538956    5125 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538962    5125 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538968    5125 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538974    5125 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538979    5125 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538984    5125 flags.go:64] FLAG: --image-service-endpoint=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538989    5125 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538994    5125 flags.go:64] FLAG: --kube-api-burst="100"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.538999    5125 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539005
5125 flags.go:64] FLAG: --kube-api-qps="50" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539010 5125 flags.go:64] FLAG: --kube-reserved="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539016 5125 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539020 5125 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539026 5125 flags.go:64] FLAG: --kubelet-cgroups="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539031 5125 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539036 5125 flags.go:64] FLAG: --lock-file="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539236 5125 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539245 5125 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539252 5125 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539270 5125 flags.go:64] FLAG: --log-json-split-stream="false" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539282 5125 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539289 5125 flags.go:64] FLAG: --log-text-split-stream="false" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539295 5125 flags.go:64] FLAG: --logging-format="text" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539302 5125 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539309 5125 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539315 5125 flags.go:64] FLAG: --manifest-url="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539325 5125 
flags.go:64] FLAG: --manifest-url-header="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539335 5125 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539342 5125 flags.go:64] FLAG: --max-open-files="1000000" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539351 5125 flags.go:64] FLAG: --max-pods="110" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539358 5125 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539364 5125 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539373 5125 flags.go:64] FLAG: --memory-manager-policy="None" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539378 5125 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539384 5125 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539392 5125 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539398 5125 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539411 5125 flags.go:64] FLAG: --node-status-max-images="50" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539416 5125 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539422 5125 flags.go:64] FLAG: --oom-score-adj="-999" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539427 5125 flags.go:64] FLAG: --pod-cidr="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539432 5125 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539442 5125 flags.go:64] FLAG: --pod-manifest-path="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539448 5125 flags.go:64] FLAG: --pod-max-pids="-1" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539453 5125 flags.go:64] FLAG: --pods-per-core="0" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539459 5125 flags.go:64] FLAG: --port="10250" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539464 5125 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539470 5125 flags.go:64] FLAG: --provider-id="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539476 5125 flags.go:64] FLAG: --qos-reserved="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539482 5125 flags.go:64] FLAG: --read-only-port="10255" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539487 5125 flags.go:64] FLAG: --register-node="true" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539493 5125 flags.go:64] FLAG: --register-schedulable="true" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539498 5125 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539507 5125 flags.go:64] FLAG: --registry-burst="10" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539512 5125 flags.go:64] FLAG: --registry-qps="5" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539518 5125 flags.go:64] FLAG: --reserved-cpus="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539523 5125 flags.go:64] FLAG: --reserved-memory="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539529 5125 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 
19:29:03.539534 5125 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539541 5125 flags.go:64] FLAG: --rotate-certificates="false" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539546 5125 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539552 5125 flags.go:64] FLAG: --runonce="false" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539557 5125 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539562 5125 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539570 5125 flags.go:64] FLAG: --seccomp-default="false" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539575 5125 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539581 5125 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539586 5125 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539592 5125 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539598 5125 flags.go:64] FLAG: --storage-driver-password="root" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539603 5125 flags.go:64] FLAG: --storage-driver-secure="false" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539634 5125 flags.go:64] FLAG: --storage-driver-table="stats" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539640 5125 flags.go:64] FLAG: --storage-driver-user="root" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539646 5125 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539651 5125 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 08 
19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539657 5125 flags.go:64] FLAG: --system-cgroups="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539662 5125 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539671 5125 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539676 5125 flags.go:64] FLAG: --tls-cert-file="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539681 5125 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539933 5125 flags.go:64] FLAG: --tls-min-version="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539939 5125 flags.go:64] FLAG: --tls-private-key-file="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539945 5125 flags.go:64] FLAG: --topology-manager-policy="none" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539950 5125 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539956 5125 flags.go:64] FLAG: --topology-manager-scope="container" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539962 5125 flags.go:64] FLAG: --v="2" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539969 5125 flags.go:64] FLAG: --version="false" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539976 5125 flags.go:64] FLAG: --vmodule="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539983 5125 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.539989 5125 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540115 5125 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540126 5125 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540132 5125 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540137 5125 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540142 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540148 5125 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540155 5125 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540160 5125 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540165 5125 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540170 5125 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540175 5125 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540181 5125 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540185 5125 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540190 5125 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540195 5125 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540200 5125 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540205 5125 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540210 5125 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540214 5125 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540219 5125 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540224 5125 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540228 5125 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540233 5125 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540239 5125 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540244 5125 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540249 5125 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540254 5125 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540259 5125 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540264 5125 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540269 5125 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540274 5125 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540279 5125 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540283 5125 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540290 5125 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540294 5125 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540299 5125 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540304 5125 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540308 5125 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540315 5125 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540320 5125 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540325 5125 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540330 5125 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540334 5125 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540339 5125 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540345 5125 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540350 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540354 5125 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540359 5125 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540363 5125 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540369 5125 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540376 5125 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540381 5125 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540386 5125 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540391 5125 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540396 5125 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540401 5125 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540406 5125 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540411 5125 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540415 5125 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540420 5125 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540425 5125 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540429 5125 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540434 5125 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540439 5125 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540443 5125 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540450 5125 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540455 5125 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540460 5125 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540465 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540469 5125 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540477 5125 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540481 5125 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540486 5125 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540491 5125 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540495 5125 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540501 5125 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540506 5125 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540511 5125 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540516 5125 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540521 5125 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540526 5125 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540531 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540535 5125 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540540 5125 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540544 5125 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.540549 5125 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.540730 5125 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.555902 5125 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.555945 5125 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556020 5125 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556028 5125 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556033 5125 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556039 5125 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556045 5125 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556050 5125 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556056 5125 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556062 5125 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556068 5125 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556073 5125 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556077 5125 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556083 5125 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556088 5125 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556093 5125 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556098 5125 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556104 5125 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556109 5125 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556115 5125 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556121 5125 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556126 5125 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556132 5125 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556137 5125 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556142 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556147 5125 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556154 5125 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556158 5125 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556163 5125 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556168 5125 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556172 5125 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556177 5125 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556182 5125 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556187 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556191 5125 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556196 5125 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556202 5125 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556207 5125 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556211 5125 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556216 5125 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556221 5125 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556226 5125 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556230 5125 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556235 5125 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556240 5125 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556247 5125 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556255 5125 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556261 5125 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556265 5125 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556271 5125 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556277 5125 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556283 5125 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556288 5125 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556297 5125 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556305 5125 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556310 5125 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556314 5125 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556319 5125 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556324 5125 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556329 5125 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556333 5125 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556338 5125 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556343 5125 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556348 5125 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556353 5125 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556359 5125 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556365 5125 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556370 5125 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556375 5125 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556380 5125 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556385 5125 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556389 5125 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556395 5125 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556400 5125 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556404 5125 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556409 5125 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556414 5125 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556419 5125 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556425 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556430 5125 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556435 5125 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556441 5125 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556446 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556452 5125 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556458 5125 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556464 5125 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556469 5125 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556474 5125 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.556483
5125 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556700 5125 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556712 5125 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556717 5125 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556722 5125 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556731 5125 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556740 5125 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556746 5125 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556753 5125 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556758 5125 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556764 5125 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556769 5125 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556775 5125 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556779 5125 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556784 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556789 5125 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556794 5125 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556799 5125 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556805 5125 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556809 5125 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 
19:29:03.556817 5125 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556825 5125 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556832 5125 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556839 5125 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556846 5125 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556852 5125 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556858 5125 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556863 5125 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556870 5125 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556876 5125 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556882 5125 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556888 5125 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556895 5125 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556901 5125 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 19:29:03 crc 
kubenswrapper[5125]: W1208 19:29:03.556907 5125 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556914 5125 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556920 5125 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556926 5125 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556930 5125 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556935 5125 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556941 5125 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556947 5125 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556953 5125 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556959 5125 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556965 5125 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556971 5125 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556978 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556984 5125 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556991 5125 feature_gate.go:328] unrecognized feature gate: 
VSphereHostVMGroupZonal Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.556997 5125 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557004 5125 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557010 5125 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557016 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557022 5125 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557028 5125 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557033 5125 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557042 5125 feature_gate.go:328] unrecognized feature gate: Example Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557048 5125 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557054 5125 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557060 5125 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557066 5125 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557072 5125 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557078 5125 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 19:29:03 
crc kubenswrapper[5125]: W1208 19:29:03.557084 5125 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557089 5125 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557094 5125 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557099 5125 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557104 5125 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557108 5125 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557114 5125 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557118 5125 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557123 5125 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557128 5125 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557133 5125 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557138 5125 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557143 5125 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557147 5125 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 19:29:03 crc 
kubenswrapper[5125]: W1208 19:29:03.557152 5125 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557157 5125 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557162 5125 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557167 5125 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557172 5125 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557177 5125 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557182 5125 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557187 5125 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557192 5125 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.557196 5125 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.557205 5125 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 
19:29:03.557413 5125 server.go:962] "Client rotation is on, will bootstrap in background" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.561937 5125 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.566436 5125 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.566585 5125 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.568168 5125 server.go:1019] "Starting client certificate rotation" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.568315 5125 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.568376 5125 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.582447 5125 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.584977 5125 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.586487 5125 dynamic_cafile_content.go:161] "Starting controller" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.601837 5125 log.go:25] "Validated CRI v1 runtime API" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.627142 5125 log.go:25] "Validated CRI v1 image API" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.628814 5125 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.631964 5125 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-08-19-23-04-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.632000 5125 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.652819 5125 manager.go:217] Machine: {Timestamp:2025-12-08 19:29:03.649656109 +0000 UTC m=+0.420146423 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649926144 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:3204b44a-5260-4c04-b0d1-92575bcb7d69 BootID:cc970274-9f45-4e00-af2e-908ff2f74194 Filesystems:[{Device:/dev/shm 
DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16824963072 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:80:a6:a7 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:80:a6:a7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ff:db:dd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c4:24:82 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:39:44:6e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:85:ae:07 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d2:05:6a:38:90:17 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:de:82:ed:99:58:c3 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649926144 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 
Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.653084 5125 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.653248 5125 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.655212 5125 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.655253 5125 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.655462 5125 topology_manager.go:138] "Creating topology manager with none policy" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.655474 5125 container_manager_linux.go:306] "Creating device plugin manager" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.655499 5125 manager.go:141] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.656465 5125 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.656952 5125 state_mem.go:36] "Initialized new in-memory state store" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.657130 5125 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.658007 5125 kubelet.go:491] "Attempting to sync node with API server" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.658028 5125 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.658044 5125 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.658065 5125 kubelet.go:397] "Adding apiserver pod source" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.658085 5125 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.660780 5125 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.660798 5125 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.662165 5125 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.662189 5125 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.664190 5125 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" 
apiVersion="v1" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.664277 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.664293 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.664431 5125 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665086 5125 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665572 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665601 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665635 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665645 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665656 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 08 19:29:03 crc 
kubenswrapper[5125]: I1208 19:29:03.665688 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665699 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665709 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665722 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665741 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665758 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.665924 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.666506 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.666526 5125 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.667929 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.676898 5125 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.677140 5125 server.go:1295] "Started kubelet" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.677253 5125 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.677359 5125 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.677430 5125 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.678383 5125 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 08 19:29:03 crc systemd[1]: Started Kubernetes Kubelet. Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.680076 5125 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.680116 5125 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.679741 5125 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f542fe7846e8b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.676935819 +0000 UTC m=+0.447426103,LastTimestamp:2025-12-08 19:29:03.676935819 +0000 UTC m=+0.447426103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.682037 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.682841 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="200ms" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.683056 5125 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.683080 5125 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.683932 5125 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.687306 5125 server.go:317] "Adding debug handlers to kubelet server" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.687637 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.690501 5125 factory.go:55] Registering systemd factory Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.690950 5125 factory.go:223] Registration of the systemd container factory successfully Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.691447 5125 factory.go:153] Registering CRI-O factory Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.691528 5125 factory.go:223] Registration of the crio container factory successfully Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.691640 5125 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 
19:29:03.691669 5125 factory.go:103] Registering Raw factory Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.691685 5125 manager.go:1196] Started watching for new ooms in manager Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.692305 5125 manager.go:319] Starting recovery of all containers Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.716302 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.716352 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.716363 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.716371 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.716378 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.716386 5125 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.716394 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717106 5125 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717131 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717145 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717157 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 
19:29:03.717166 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717174 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717182 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717192 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717203 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717211 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717219 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717227 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717235 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717243 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717250 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717259 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717267 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" 
volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717275 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717284 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717292 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717308 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717316 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717327 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" 
seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717334 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717343 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717351 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717360 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717367 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717375 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: 
I1208 19:29:03.717384 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717393 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717401 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717445 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717453 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717461 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717469 5125 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717476 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717483 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717492 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717500 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717511 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717518 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717526 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717535 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717542 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717549 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717556 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717563 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717572 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717580 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717592 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717600 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717623 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717631 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" 
seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717639 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717646 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717654 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717663 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717670 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717679 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717686 5125 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717693 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717702 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717709 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717716 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717723 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717771 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717782 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717792 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717802 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717812 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717823 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717832 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717840 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717850 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717857 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717866 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717875 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717894 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717907 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717914 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717922 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717929 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717937 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717944 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717952 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717961 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717968 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717976 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717987 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.717995 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718004 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718012 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718020 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718027 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718035 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718042 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718049 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718056 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718064 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718071 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718079 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718086 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718093 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718102 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718109 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718126 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718133 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718140 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718147 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718154 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718161 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718170 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718177 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718199 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718209 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718229 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718240 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718252 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718263 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718273 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718283 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718291 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718299 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718307 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718314 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718322 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718329 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718336 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718379 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718389 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718397 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718494 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718502 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718509 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718517 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718525 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718533 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718540 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718547 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718554 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718561 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718570 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718581 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718588 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718596 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718617 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718625 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718633 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718641 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718648 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718655 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718663 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718673 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718680 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718688 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718695 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718702 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718711 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718718 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718726 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718734 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.718741 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.722822 5125 manager.go:324] Recovery completed
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.723935 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724000 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724024 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724037 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724048 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724065 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724095 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724118 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724132 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724152 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724191 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724201 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724214 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724225 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724240 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724253 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724266 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724277 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724290 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724304 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724316 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724329 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724339 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724354 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724366 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724376 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724391 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724400 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724412 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724423 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724433 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724447 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724458 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert"
seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724479 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724491 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724504 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724513 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724544 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724558 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724568 5125 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724582 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724591 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724605 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724631 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724640 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724651 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724662 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724673 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724682 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724694 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724704 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724712 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" 
volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724737 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724748 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724762 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724829 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724840 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724850 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" 
seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.724999 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725035 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725054 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725065 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725080 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725090 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 08 19:29:03 crc 
kubenswrapper[5125]: I1208 19:29:03.725101 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725154 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725181 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725196 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725207 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725222 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725235 5125 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725250 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725263 5125 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725272 5125 reconstruct.go:97] "Volume reconstruction finished" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.725280 5125 reconciler.go:26] "Reconciler: start to sync state" Dec 08 19:29:03 crc kubenswrapper[5125]: W1208 19:29:03.733404 5125 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/system.slice/crc-routes-controller.service/cpu.weight": read /sys/fs/cgroup/system.slice/crc-routes-controller.service/cpu.weight: no such device Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.738381 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.741049 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.741117 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.741148 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.742102 5125 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.742123 5125 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.742159 5125 state_mem.go:36] "Initialized new in-memory state store" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.745054 5125 policy_none.go:49] "None policy: Start" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.745091 5125 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.745108 5125 state_mem.go:35] "Initializing new in-memory state store" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.764343 5125 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.766106 5125 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.766141 5125 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.766168 5125 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.766177 5125 kubelet.go:2451] "Starting kubelet main sync loop" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.766219 5125 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.768381 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.782807 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.788840 5125 manager.go:341] "Starting Device Plugin manager" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.788896 5125 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.788909 5125 server.go:85] "Starting device plugin registration server" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.789353 5125 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.789373 5125 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.789633 5125 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.789702 5125 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 08 19:29:03 
crc kubenswrapper[5125]: I1208 19:29:03.789713 5125 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.795488 5125 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.795547 5125 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.867050 5125 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.867273 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.868343 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.868390 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.868404 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.869140 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.869312 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.869352 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.869591 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.869657 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.869675 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.869986 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.870027 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.870046 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.870489 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.870556 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.870590 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.871080 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.871126 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.871141 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.871085 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.871220 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.871235 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.872002 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.872068 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.872102 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.872559 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.872597 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.872635 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.872645 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.872669 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.872684 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.873464 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.873520 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.873557 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.874019 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.874054 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.874073 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.874114 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.874138 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.874152 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.874970 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.875008 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.875509 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.875546 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.875563 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.883732 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="400ms"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.889836 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.890401 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.890437 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.890449 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.890469 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.891029 5125 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.901803 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.924691 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928157 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928199 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928227 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928548 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928639 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928666 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928688 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928708 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928747 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928768 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928793 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928824 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928847 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928873 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928898 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928917 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.928984 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929033 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929101 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929166 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929223 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929242 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929270 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929319 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929356 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929396 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929450 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929397 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929537 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: I1208 19:29:03.929841 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.934846 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.965544 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:03 crc kubenswrapper[5125]: E1208 19:29:03.972037 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030569 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030652 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030669 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030683 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030699 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030715 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030751 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030766 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030781 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030786 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030846 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030871 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030917 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030902 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030973 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030933 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030798 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031018 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031036 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031049 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031055 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031073 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030941 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031108 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031136 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.030964 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031159 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031179 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031187 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031201 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031214 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.031339 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.091943 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.093414 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.093517 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.093546 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.093660 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: E1208 19:29:04.094598 5125 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.202505 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.225918 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: W1208 19:29:04.233298 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-a2104ae8cf8795b5f3ccc4046b66d03101bf44a6e7195e5b02e611b11d5cf199 WatchSource:0}: Error finding container a2104ae8cf8795b5f3ccc4046b66d03101bf44a6e7195e5b02e611b11d5cf199: Status 404 returned error can't find the container with id a2104ae8cf8795b5f3ccc4046b66d03101bf44a6e7195e5b02e611b11d5cf199
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.235461 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.238243 5125 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 08 19:29:04 crc kubenswrapper[5125]: W1208 19:29:04.251625 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-9020bfdfe7d1f551017dc9e35925c69c6a344a14d3989c365c4dd5b5e16bd7d8 WatchSource:0}: Error finding container 9020bfdfe7d1f551017dc9e35925c69c6a344a14d3989c365c4dd5b5e16bd7d8: Status 404 returned error can't find the container with id 9020bfdfe7d1f551017dc9e35925c69c6a344a14d3989c365c4dd5b5e16bd7d8
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.266317 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.273418 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: E1208 19:29:04.284509 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="800ms"
Dec 08 19:29:04 crc kubenswrapper[5125]: W1208 19:29:04.294433 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-9a57a0f25edf608b6617d2656a49df8a35b5c8f290242568c9102e7b27781121 WatchSource:0}: Error finding container 9a57a0f25edf608b6617d2656a49df8a35b5c8f290242568c9102e7b27781121: Status 404 returned error can't find the container with id 9a57a0f25edf608b6617d2656a49df8a35b5c8f290242568c9102e7b27781121
Dec 08 19:29:04 crc kubenswrapper[5125]: E1208 19:29:04.491957 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.495303 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.497105 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.497149 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.497161 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.497207 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: E1208 19:29:04.497667 5125 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.669130 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.772157 5125 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0" exitCode=0
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.772243 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0"}
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.772334 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"a2104ae8cf8795b5f3ccc4046b66d03101bf44a6e7195e5b02e611b11d5cf199"}
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.772465 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.773210 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac"}
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.773240 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"9a57a0f25edf608b6617d2656a49df8a35b5c8f290242568c9102e7b27781121"}
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.773346 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.773926 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.773950 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.773958 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:04 crc kubenswrapper[5125]: E1208 19:29:04.774101 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.774586 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.774632 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.774645 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:04 crc kubenswrapper[5125]: E1208 19:29:04.774843 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.775318 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b"}
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.775350 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7dceba5e983183951cdea8c3eaebee19ab7923a21f5f6600e1eb19e5a958eb49"}
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.776423 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e"}
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.776449 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4a2d48da73a15231bee559f02d7e22b992076458893381ff8ef89b7539b4b5e1"}
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.776843 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.777374 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.777404 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.777414 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:04 crc kubenswrapper[5125]: E1208 19:29:04.777711 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.778111 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9"}
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.778138 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9020bfdfe7d1f551017dc9e35925c69c6a344a14d3989c365c4dd5b5e16bd7d8"}
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.778268 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.779193 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.779230 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:04 crc kubenswrapper[5125]: I1208 19:29:04.779244 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:04 crc kubenswrapper[5125]: E1208 19:29:04.779448 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:04 crc kubenswrapper[5125]: E1208 19:29:04.786456 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 19:29:05 crc kubenswrapper[5125]: E1208 19:29:05.086348 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="1.6s"
Dec 08 19:29:05 crc kubenswrapper[5125]: E1208 19:29:05.185140 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 19:29:05 crc kubenswrapper[5125]: E1208 19:29:05.232871 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.298828 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.300152 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.300239 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.300260 5125 kubelet_node_status.go:736]
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.300303 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:05 crc kubenswrapper[5125]: E1208 19:29:05.301061 5125 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.671316 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.744154 5125 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 19:29:05 crc kubenswrapper[5125]: E1208 19:29:05.745666 5125 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.784054 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63"} Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.784163 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911"} Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.786196 5125 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e" exitCode=0 Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.786301 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e"} Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.786541 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.787599 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.787700 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.787722 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:05 crc kubenswrapper[5125]: E1208 19:29:05.788157 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.789698 5125 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9" exitCode=0 Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.789772 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9"} Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.789925 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.790534 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.790933 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.790960 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.790971 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:05 crc kubenswrapper[5125]: E1208 19:29:05.791178 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.791557 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.791573 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.791584 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.793389 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397"} Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.793704 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.794508 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.794550 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.794562 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:05 crc kubenswrapper[5125]: E1208 19:29:05.794851 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.796288 5125 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac" exitCode=0 Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.796338 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac"} Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.796482 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.797239 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 
19:29:05.797300 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:05 crc kubenswrapper[5125]: I1208 19:29:05.797319 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:05 crc kubenswrapper[5125]: E1208 19:29:05.797671 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:06 crc kubenswrapper[5125]: E1208 19:29:06.597230 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.668414 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.174:6443: connect: connection refused Dec 08 19:29:06 crc kubenswrapper[5125]: E1208 19:29:06.687872 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="3.2s" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.803826 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124"} Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.803894 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be"} Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.803905 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841"} Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.804036 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.804496 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.804532 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.804544 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:06 crc kubenswrapper[5125]: E1208 19:29:06.804769 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.805800 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19"} Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.806006 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.807459 5125 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.807492 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.807506 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:06 crc kubenswrapper[5125]: E1208 19:29:06.807916 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.817219 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05"} Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.817271 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12"} Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.817283 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0"} Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.817292 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c"} Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.818674 5125 
generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6" exitCode=0 Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.818706 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6"} Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.818925 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.819574 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.819643 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.819661 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:06 crc kubenswrapper[5125]: E1208 19:29:06.819920 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.901679 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.902879 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.902921 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.902930 5125 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.902953 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:06 crc kubenswrapper[5125]: E1208 19:29:06.903428 5125 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.174:6443: connect: connection refused" node="crc" Dec 08 19:29:06 crc kubenswrapper[5125]: I1208 19:29:06.973867 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:29:06 crc kubenswrapper[5125]: E1208 19:29:06.974409 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.174:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.829479 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4afa51403d07d17fada4ad9c4d680fdc6867966b26d0cac2c9848c6e52f8cf76"} Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.829701 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.830743 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.830773 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.830783 5125 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5125]: E1208 19:29:07.830966 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.832595 5125 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b" exitCode=0 Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.832644 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b"} Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.832798 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.832856 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.833702 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.833725 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.833735 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.833769 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.833794 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 19:29:07 crc kubenswrapper[5125]: I1208 19:29:07.833806 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:07 crc kubenswrapper[5125]: E1208 19:29:07.833973 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:07 crc kubenswrapper[5125]: E1208 19:29:07.834337 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.839232 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b"} Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.839277 5125 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.839286 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2"} Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.839305 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697"} Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.839321 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a"} Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 
19:29:08.839338 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23"} Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.839308 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.839394 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.839521 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.840120 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.840142 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.840153 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5125]: E1208 19:29:08.840353 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.840530 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.840571 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.840584 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 
crc kubenswrapper[5125]: I1208 19:29:08.840992 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.841008 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.841016 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:08 crc kubenswrapper[5125]: E1208 19:29:08.841013 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:08 crc kubenswrapper[5125]: E1208 19:29:08.841271 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.844630 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:08 crc kubenswrapper[5125]: I1208 19:29:08.989323 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:09 crc kubenswrapper[5125]: I1208 19:29:09.611867 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 08 19:29:09 crc kubenswrapper[5125]: I1208 19:29:09.840715 5125 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 19:29:09 crc kubenswrapper[5125]: I1208 19:29:09.840746 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:09 crc kubenswrapper[5125]: I1208 19:29:09.840769 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:09 crc kubenswrapper[5125]: I1208 19:29:09.841406 
5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:09 crc kubenswrapper[5125]: I1208 19:29:09.841443 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:09 crc kubenswrapper[5125]: I1208 19:29:09.841462 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:09 crc kubenswrapper[5125]: I1208 19:29:09.841508 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:09 crc kubenswrapper[5125]: I1208 19:29:09.841536 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:09 crc kubenswrapper[5125]: I1208 19:29:09.841553 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:09 crc kubenswrapper[5125]: E1208 19:29:09.841799 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:09 crc kubenswrapper[5125]: E1208 19:29:09.842115 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:09 crc kubenswrapper[5125]: I1208 19:29:09.870044 5125 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.024342 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.104426 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.105555 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.105634 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.105647 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.105673 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.843238 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.843271 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.843863 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.843900 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.843912 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.844003 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.844034 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:10 crc kubenswrapper[5125]: I1208 19:29:10.844046 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:10 crc kubenswrapper[5125]: E1208 19:29:10.844266 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:10 crc kubenswrapper[5125]: E1208 19:29:10.844551 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:11 crc kubenswrapper[5125]: I1208 19:29:11.845369 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:11 crc kubenswrapper[5125]: I1208 19:29:11.846404 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:11 crc kubenswrapper[5125]: I1208 19:29:11.846464 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:11 crc kubenswrapper[5125]: I1208 19:29:11.846478 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:11 crc kubenswrapper[5125]: E1208 19:29:11.847051 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.163499 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.163730 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.165239 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.165290 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.165301 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:12 crc kubenswrapper[5125]: E1208 19:29:12.165738 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.171487 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.489013 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.847737 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.848735 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.848802 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.848822 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:12 crc kubenswrapper[5125]: E1208 19:29:12.849342 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:12 crc kubenswrapper[5125]: I1208 19:29:12.907589 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.431702 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.431906 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.432674 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.432707 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.432716 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:13 crc kubenswrapper[5125]: E1208 19:29:13.432959 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.493310 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.493679 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.494724 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.494799 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.494817 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:13 crc kubenswrapper[5125]: E1208 19:29:13.495476 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:13 crc kubenswrapper[5125]: E1208 19:29:13.795884 5125 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.850073 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.850847 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.850919 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:13 crc kubenswrapper[5125]: I1208 19:29:13.850947 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:13 crc kubenswrapper[5125]: E1208 19:29:13.851603 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:14 crc kubenswrapper[5125]: I1208 19:29:14.852547 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:14 crc kubenswrapper[5125]: I1208 19:29:14.853352 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:14 crc kubenswrapper[5125]: I1208 19:29:14.853399 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:14 crc kubenswrapper[5125]: I1208 19:29:14.853418 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:14 crc kubenswrapper[5125]: E1208 19:29:14.853972 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:14 crc kubenswrapper[5125]: I1208 19:29:14.860467 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:15 crc kubenswrapper[5125]: I1208 19:29:15.489180 5125 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body=
Dec 08 19:29:15 crc kubenswrapper[5125]: I1208 19:29:15.489299 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded"
Dec 08 19:29:15 crc kubenswrapper[5125]: I1208 19:29:15.856485 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:15 crc kubenswrapper[5125]: I1208 19:29:15.857544 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:15 crc kubenswrapper[5125]: I1208 19:29:15.857602 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:15 crc kubenswrapper[5125]: I1208 19:29:15.857659 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:15 crc kubenswrapper[5125]: E1208 19:29:15.858137 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:17 crc kubenswrapper[5125]: I1208 19:29:17.203449 5125 trace.go:236] Trace[1768822229]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 19:29:07.201) (total time: 10001ms):
Dec 08 19:29:17 crc kubenswrapper[5125]: Trace[1768822229]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:29:17.203)
Dec 08 19:29:17 crc kubenswrapper[5125]: Trace[1768822229]: [10.001749441s] [10.001749441s] END
Dec 08 19:29:17 crc kubenswrapper[5125]: E1208 19:29:17.204002 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 19:29:17 crc kubenswrapper[5125]: I1208 19:29:17.327514 5125 trace.go:236] Trace[1461190032]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 19:29:07.325) (total time: 10001ms):
Dec 08 19:29:17 crc kubenswrapper[5125]: Trace[1461190032]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:29:17.327)
Dec 08 19:29:17 crc kubenswrapper[5125]: Trace[1461190032]: [10.001754903s] [10.001754903s] END
Dec 08 19:29:17 crc kubenswrapper[5125]: E1208 19:29:17.327560 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 19:29:17 crc kubenswrapper[5125]: I1208 19:29:17.674280 5125 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 08 19:29:17 crc kubenswrapper[5125]: I1208 19:29:17.674361 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 08 19:29:17 crc kubenswrapper[5125]: I1208 19:29:17.680566 5125 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 08 19:29:17 crc kubenswrapper[5125]: I1208 19:29:17.680681 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 08 19:29:18 crc kubenswrapper[5125]: I1208 19:29:18.849723 5125 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]log ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]etcd ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/priority-and-fairness-filter ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/start-apiextensions-informers ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/start-apiextensions-controllers ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/crd-informer-synced ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/start-system-namespaces-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/bootstrap-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/apiservice-registration-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/apiservice-discovery-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]autoregister-completion ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/apiservice-openapi-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 08 19:29:18 crc kubenswrapper[5125]: livez check failed
Dec 08 19:29:18 crc kubenswrapper[5125]: I1208 19:29:18.849803 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:29:19 crc kubenswrapper[5125]: E1208 19:29:19.888714 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Dec 08 19:29:21 crc kubenswrapper[5125]: E1208 19:29:21.173630 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 19:29:21 crc kubenswrapper[5125]: E1208 19:29:21.562843 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.686713 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542fe7846e8b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.676935819 +0000 UTC m=+0.447426103,LastTimestamp:2025-12-08 19:29:03.676935819 +0000 UTC m=+0.447426103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.686795 5125 trace.go:236] Trace[1721611246]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 19:29:10.873) (total time: 11813ms):
Dec 08 19:29:22 crc kubenswrapper[5125]: Trace[1721611246]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 11813ms (19:29:22.686)
Dec 08 19:29:22 crc kubenswrapper[5125]: Trace[1721611246]: [11.813480157s] [11.813480157s] END
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.686930 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.687808 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.690623 5125 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.694411 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.694411 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb575500 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741089024 +0000 UTC m=+0.511579338,LastTimestamp:2025-12-08 19:29:03.741089024 +0000 UTC m=+0.511579338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.694680 5125 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.703969 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb58057d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741134205 +0000 UTC m=+0.511624519,LastTimestamp:2025-12-08 19:29:03.741134205 +0000 UTC m=+0.511624519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.705182 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb586140 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741157696 +0000 UTC m=+0.511648000,LastTimestamp:2025-12-08 19:29:03.741157696 +0000 UTC m=+0.511648000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.711479 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542fee5be692 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.791720082 +0000 UTC m=+0.562210356,LastTimestamp:2025-12-08 19:29:03.791720082 +0000 UTC m=+0.562210356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.718432 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb575500\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb575500 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741089024 +0000 UTC m=+0.511579338,LastTimestamp:2025-12-08 19:29:03.868371967 +0000 UTC m=+0.638862261,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.724386 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb58057d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb58057d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741134205 +0000 UTC m=+0.511624519,LastTimestamp:2025-12-08 19:29:03.868397497 +0000 UTC m=+0.638887781,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.729936 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb586140\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb586140 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741157696 +0000 UTC m=+0.511648000,LastTimestamp:2025-12-08 19:29:03.868410218 +0000 UTC m=+0.638900512,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.730215 5125 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41570->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.730286 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41570->192.168.126.11:17697: read: connection reset by peer"
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.730458 5125 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41574->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.730554 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41574->192.168.126.11:17697: read: connection reset by peer"
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.738576 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.738586 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb575500\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb575500 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741089024 +0000 UTC m=+0.511579338,LastTimestamp:2025-12-08 19:29:03.86964001 +0000 UTC m=+0.640130304,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.739179 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.739949 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.740073 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.740086 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.740371 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.745544 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb58057d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb58057d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741134205 +0000 UTC m=+0.511624519,LastTimestamp:2025-12-08 19:29:03.869667701 +0000 UTC m=+0.640157995,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.748732 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.752299 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb586140\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb586140 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741157696 +0000 UTC m=+0.511648000,LastTimestamp:2025-12-08 19:29:03.869681981 +0000 UTC m=+0.640172265,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.758476 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb575500\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb575500 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741089024 +0000 UTC m=+0.511579338,LastTimestamp:2025-12-08 19:29:03.87000859 +0000 UTC m=+0.640498874,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.768737 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb58057d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb58057d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741134205 +0000 UTC m=+0.511624519,LastTimestamp:2025-12-08 19:29:03.870036871 +0000 UTC m=+0.640527155,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.773018 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb586140\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb586140 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741157696 +0000 UTC m=+0.511648000,LastTimestamp:2025-12-08 19:29:03.870053431 +0000 UTC m=+0.640543715,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.777508 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb575500\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb575500 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741089024 +0000 UTC m=+0.511579338,LastTimestamp:2025-12-08 19:29:03.871110689 +0000 UTC m=+0.641600963,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.781867 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb58057d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb58057d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741134205 +0000 UTC m=+0.511624519,LastTimestamp:2025-12-08 19:29:03.87113449 +0000 UTC m=+0.641624764,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.786218 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb586140\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb586140 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741157696 +0000 UTC m=+0.511648000,LastTimestamp:2025-12-08 19:29:03.87114611 +0000 UTC m=+0.641636384,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.790660 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb575500\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb575500 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741089024 +0000 UTC 
m=+0.511579338,LastTimestamp:2025-12-08 19:29:03.871197411 +0000 UTC m=+0.641687695,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.794265 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb58057d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb58057d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741134205 +0000 UTC m=+0.511624519,LastTimestamp:2025-12-08 19:29:03.871229042 +0000 UTC m=+0.641719326,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.797958 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb586140\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb586140 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741157696 +0000 UTC m=+0.511648000,LastTimestamp:2025-12-08 19:29:03.871240792 +0000 UTC m=+0.641731076,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.801765 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb575500\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb575500 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741089024 +0000 UTC m=+0.511579338,LastTimestamp:2025-12-08 19:29:03.872585008 +0000 UTC m=+0.643075302,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.806860 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb58057d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb58057d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741134205 +0000 UTC m=+0.511624519,LastTimestamp:2025-12-08 19:29:03.872627379 +0000 UTC m=+0.643117663,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.810813 5125 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb586140\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb586140 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741157696 +0000 UTC m=+0.511648000,LastTimestamp:2025-12-08 19:29:03.872642839 +0000 UTC m=+0.643133123,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.816919 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb575500\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb575500 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741089024 +0000 UTC m=+0.511579338,LastTimestamp:2025-12-08 19:29:03.87265991 +0000 UTC m=+0.643150204,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.820978 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f542feb58057d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f542feb58057d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:03.741134205 +0000 UTC m=+0.511624519,LastTimestamp:2025-12-08 19:29:03.87267721 +0000 UTC m=+0.643167504,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.826137 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f543008ff7cdd openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.238648541 +0000 UTC m=+1.009138815,LastTimestamp:2025-12-08 19:29:04.238648541 +0000 UTC m=+1.009138815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.830750 5125 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543009e1d1d8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.253481432 +0000 UTC m=+1.023971706,LastTimestamp:2025-12-08 19:29:04.253481432 +0000 UTC m=+1.023971706,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.835667 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54300a9458bc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.265181372 +0000 UTC m=+1.035671656,LastTimestamp:2025-12-08 19:29:04.265181372 +0000 UTC m=+1.035671656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.839955 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54300c16bc8d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.290503821 +0000 UTC m=+1.060994115,LastTimestamp:2025-12-08 19:29:04.290503821 +0000 UTC m=+1.060994115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.846648 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54300c7eac8e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.29731547 +0000 UTC m=+1.067805744,LastTimestamp:2025-12-08 19:29:04.29731547 +0000 UTC m=+1.067805744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.851983 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54302680590b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.733632779 +0000 UTC m=+1.504123053,LastTimestamp:2025-12-08 19:29:04.733632779 +0000 UTC m=+1.504123053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.856239 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54302681e4ae openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.733734062 +0000 UTC m=+1.504224336,LastTimestamp:2025-12-08 19:29:04.733734062 +0000 UTC m=+1.504224336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.860197 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54302682473d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.733759293 +0000 UTC m=+1.504249567,LastTimestamp:2025-12-08 19:29:04.733759293 +0000 UTC m=+1.504249567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.865006 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.187f543026837c4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.733838415 +0000 UTC m=+1.504328689,LastTimestamp:2025-12-08 19:29:04.733838415 +0000 UTC m=+1.504328689,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.869196 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543026a2f804 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.7359017 +0000 UTC m=+1.506391974,LastTimestamp:2025-12-08 19:29:04.7359017 +0000 UTC m=+1.506391974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.874775 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5430272f71c8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.745107912 +0000 UTC m=+1.515598186,LastTimestamp:2025-12-08 19:29:04.745107912 +0000 UTC m=+1.515598186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.875152 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.876597 5125 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4afa51403d07d17fada4ad9c4d680fdc6867966b26d0cac2c9848c6e52f8cf76" exitCode=255 Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.876709 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4afa51403d07d17fada4ad9c4d680fdc6867966b26d0cac2c9848c6e52f8cf76"} Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.876807 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.877018 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.877381 5125 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.877433 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.877446 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.877475 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.877494 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.877504 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.877786 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.878116 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:22 crc kubenswrapper[5125]: I1208 19:29:22.878295 5125 scope.go:117] "RemoveContainer" containerID="4afa51403d07d17fada4ad9c4d680fdc6867966b26d0cac2c9848c6e52f8cf76" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.881278 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5430273fa493 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.746169491 +0000 UTC m=+1.516659765,LastTimestamp:2025-12-08 19:29:04.746169491 +0000 UTC m=+1.516659765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.887139 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f5430274c0591 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.746980753 +0000 UTC m=+1.517471027,LastTimestamp:2025-12-08 19:29:04.746980753 +0000 UTC m=+1.517471027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.891914 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54302753d30a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.747492106 +0000 UTC m=+1.517982380,LastTimestamp:2025-12-08 19:29:04.747492106 +0000 UTC m=+1.517982380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.899845 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543027b6f9aa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.753990058 +0000 UTC m=+1.524480332,LastTimestamp:2025-12-08 19:29:04.753990058 +0000 UTC m=+1.524480332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.907103 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543027b7338e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.754004878 +0000 UTC m=+1.524495152,LastTimestamp:2025-12-08 19:29:04.754004878 +0000 UTC m=+1.524495152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.911947 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f543028f90fcb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:04.775098315 +0000 UTC m=+1.545588589,LastTimestamp:2025-12-08 19:29:04.775098315 +0000 UTC m=+1.545588589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.916410 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54303d1daee8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.113042664 +0000 UTC m=+1.883532928,LastTimestamp:2025-12-08 19:29:05.113042664 +0000 UTC m=+1.883532928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.922395 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54304948f0fe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.317204222 
+0000 UTC m=+2.087694516,LastTimestamp:2025-12-08 19:29:05.317204222 +0000 UTC m=+2.087694516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.928477 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f543049ffcc65 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.329187941 +0000 UTC m=+2.099678225,LastTimestamp:2025-12-08 19:29:05.329187941 +0000 UTC m=+2.099678225,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.934826 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54304a135ca6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.330470054 +0000 UTC m=+2.100960338,LastTimestamp:2025-12-08 19:29:05.330470054 +0000 UTC m=+2.100960338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.939532 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f54304a69e5c2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.33614125 +0000 UTC m=+2.106631534,LastTimestamp:2025-12-08 19:29:05.33614125 +0000 UTC m=+2.106631534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.944557 5125 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54305e3a40f7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.668563191 +0000 UTC m=+2.439053465,LastTimestamp:2025-12-08 19:29:05.668563191 +0000 UTC m=+2.439053465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.949464 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54305efd6c67 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.681353831 +0000 UTC m=+2.451844105,LastTimestamp:2025-12-08 19:29:05.681353831 +0000 UTC m=+2.451844105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.953897 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54305f0deb2b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.682434859 +0000 UTC m=+2.452925133,LastTimestamp:2025-12-08 19:29:05.682434859 +0000 UTC m=+2.452925133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.958478 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430657b2ae4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.790257892 +0000 UTC m=+2.560748176,LastTimestamp:2025-12-08 19:29:05.790257892 +0000 UTC m=+2.560748176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.964509 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430659b9117 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.792381207 +0000 UTC m=+2.562871491,LastTimestamp:2025-12-08 19:29:05.792381207 +0000 UTC m=+2.562871491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.969946 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5430661c0970 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.800800624 +0000 UTC m=+2.571290918,LastTimestamp:2025-12-08 19:29:05.800800624 +0000 UTC m=+2.571290918,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.974823 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f54306ebc6936 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.94552863 +0000 UTC m=+2.716018904,LastTimestamp:2025-12-08 19:29:05.94552863 +0000 UTC m=+2.716018904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.978715 5125 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f543070097917 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:05.967356183 +0000 UTC m=+2.737846457,LastTimestamp:2025-12-08 19:29:05.967356183 +0000 UTC m=+2.737846457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.983148 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430737d37dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.025273308 +0000 UTC m=+2.795763582,LastTimestamp:2025-12-08 19:29:06.025273308 +0000 UTC m=+2.795763582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.988070 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f543073868a0e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.025884174 +0000 UTC m=+2.796374448,LastTimestamp:2025-12-08 19:29:06.025884174 +0000 UTC m=+2.796374448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:22 crc kubenswrapper[5125]: E1208 19:29:22.993224 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543073f908f7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.033387767 +0000 UTC m=+2.803878041,LastTimestamp:2025-12-08 19:29:06.033387767 +0000 UTC 
m=+2.803878041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.000525 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f543074545acc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.039372492 +0000 UTC m=+2.809862756,LastTimestamp:2025-12-08 19:29:06.039372492 +0000 UTC m=+2.809862756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.006791 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5430746dae49 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.041032265 +0000 UTC m=+2.811522539,LastTimestamp:2025-12-08 19:29:06.041032265 +0000 UTC m=+2.811522539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.013489 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54307481f7c7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.042361799 +0000 UTC m=+2.812852073,LastTimestamp:2025-12-08 19:29:06.042361799 +0000 UTC m=+2.812852073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.017735 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430748d6ec6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.043113158 +0000 UTC m=+2.813603432,LastTimestamp:2025-12-08 19:29:06.043113158 +0000 UTC m=+2.813603432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.022141 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543074f5012c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.049900844 +0000 UTC m=+2.820391118,LastTimestamp:2025-12-08 19:29:06.049900844 +0000 UTC m=+2.820391118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.026839 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54307f430429 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.222785577 +0000 UTC m=+2.993275851,LastTimestamp:2025-12-08 19:29:06.222785577 +0000 UTC m=+2.993275851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.030785 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54307f431fd1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.222792657 +0000 UTC m=+2.993282931,LastTimestamp:2025-12-08 19:29:06.222792657 +0000 UTC m=+2.993282931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.035094 5125 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54307fea4890 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.2337476 +0000 UTC m=+3.004237874,LastTimestamp:2025-12-08 19:29:06.2337476 +0000 UTC m=+3.004237874,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.039880 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54307ffa7086 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.234806406 +0000 UTC m=+3.005296690,LastTimestamp:2025-12-08 19:29:06.234806406 +0000 UTC m=+3.005296690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.045060 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f543080139df7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.236456439 +0000 UTC m=+3.006946713,LastTimestamp:2025-12-08 19:29:06.236456439 +0000 UTC m=+3.006946713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.050002 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5430801e4e7b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.237156987 +0000 UTC m=+3.007647261,LastTimestamp:2025-12-08 19:29:06.237156987 +0000 UTC m=+3.007647261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.053960 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54308b2cc574 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.422654324 +0000 UTC m=+3.193144608,LastTimestamp:2025-12-08 19:29:06.422654324 +0000 UTC m=+3.193144608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.057499 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54308b34ad7e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.423172478 +0000 UTC m=+3.193662752,LastTimestamp:2025-12-08 19:29:06.423172478 +0000 UTC m=+3.193662752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.060966 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54308c2e6b36 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.43953951 +0000 UTC m=+3.210029784,LastTimestamp:2025-12-08 19:29:06.43953951 +0000 UTC m=+3.210029784,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.065654 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.187f54308c3dac46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.440539206 +0000 UTC m=+3.211029480,LastTimestamp:2025-12-08 19:29:06.440539206 +0000 UTC m=+3.211029480,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.069624 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f54308c5bfa72 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.442525298 +0000 UTC m=+3.213015572,LastTimestamp:2025-12-08 19:29:06.442525298 +0000 UTC m=+3.213015572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc 
kubenswrapper[5125]: E1208 19:29:23.073937 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54309629830d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.606990093 +0000 UTC m=+3.377480367,LastTimestamp:2025-12-08 19:29:06.606990093 +0000 UTC m=+3.377480367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.077378 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430970f70b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.622058672 +0000 UTC m=+3.392548956,LastTimestamp:2025-12-08 19:29:06.622058672 +0000 UTC m=+3.392548956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.081239 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430971e7d22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.623044898 +0000 UTC m=+3.393535182,LastTimestamp:2025-12-08 19:29:06.623044898 +0000 UTC m=+3.393535182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.087195 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430a2e9a3ee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.820908014 +0000 UTC m=+3.591398298,LastTimestamp:2025-12-08 19:29:06.820908014 +0000 UTC m=+3.591398298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.091491 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430a4bba51d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.851448093 +0000 UTC m=+3.621938367,LastTimestamp:2025-12-08 19:29:06.851448093 +0000 UTC m=+3.621938367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.096811 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430a57147e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.86335178 +0000 UTC m=+3.633842064,LastTimestamp:2025-12-08 19:29:06.86335178 +0000 UTC m=+3.633842064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.101577 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430b061622a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.046859306 +0000 UTC m=+3.817349580,LastTimestamp:2025-12-08 19:29:07.046859306 +0000 UTC m=+3.817349580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.107736 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430b0f5459e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.056551326 +0000 UTC m=+3.827041610,LastTimestamp:2025-12-08 19:29:07.056551326 +0000 UTC m=+3.827041610,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.113564 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430df6ef44c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:07.836277836 +0000 UTC m=+4.606768110,LastTimestamp:2025-12-08 19:29:07.836277836 +0000 UTC m=+4.606768110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.117667 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.187f5430ea8ad168 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.022653288 +0000 UTC m=+4.793143572,LastTimestamp:2025-12-08 19:29:08.022653288 +0000 UTC m=+4.793143572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.122364 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430eb295c48 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.033043528 +0000 UTC m=+4.803533812,LastTimestamp:2025-12-08 19:29:08.033043528 +0000 UTC m=+4.803533812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.127638 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430eb3cfedc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.034330332 +0000 UTC m=+4.804820606,LastTimestamp:2025-12-08 19:29:08.034330332 +0000 UTC m=+4.804820606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.132817 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430f49fd1e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.191801832 +0000 UTC m=+4.962292106,LastTimestamp:2025-12-08 19:29:08.191801832 +0000 UTC m=+4.962292106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.137698 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430f543d73e openshift-etcd 
0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.202551102 +0000 UTC m=+4.973041386,LastTimestamp:2025-12-08 19:29:08.202551102 +0000 UTC m=+4.973041386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.142922 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430f551df5d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.203470685 +0000 UTC m=+4.973960979,LastTimestamp:2025-12-08 19:29:08.203470685 +0000 UTC m=+4.973960979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.148591 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.187f5430ff312392 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.369097618 +0000 UTC m=+5.139587892,LastTimestamp:2025-12-08 19:29:08.369097618 +0000 UTC m=+5.139587892,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.152983 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430ffca2b78 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.379126648 +0000 UTC m=+5.149616972,LastTimestamp:2025-12-08 19:29:08.379126648 +0000 UTC m=+5.149616972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.156862 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5430ffde79f7 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.380457463 +0000 UTC m=+5.150947767,LastTimestamp:2025-12-08 19:29:08.380457463 +0000 UTC m=+5.150947767,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.161057 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54310a93bd0a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.56010881 +0000 UTC m=+5.330599084,LastTimestamp:2025-12-08 19:29:08.56010881 +0000 UTC m=+5.330599084,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.167232 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.187f54310b3c2fc3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.571148227 +0000 UTC m=+5.341638521,LastTimestamp:2025-12-08 19:29:08.571148227 +0000 UTC m=+5.341638521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.171852 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54310b505957 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.572469591 +0000 UTC m=+5.342959865,LastTimestamp:2025-12-08 19:29:08.572469591 +0000 UTC m=+5.342959865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.176556 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f543115761d79 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.742716793 +0000 UTC m=+5.513207077,LastTimestamp:2025-12-08 19:29:08.742716793 +0000 UTC m=+5.513207077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.181967 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f54311639fddd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:08.755553757 +0000 UTC m=+5.526044051,LastTimestamp:2025-12-08 19:29:08.755553757 +0000 UTC m=+5.526044051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.189551 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 08 19:29:23 crc 
kubenswrapper[5125]: &Event{ObjectMeta:{kube-controller-manager-crc.187f5432a7965353 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Dec 08 19:29:23 crc kubenswrapper[5125]: body: Dec 08 19:29:23 crc kubenswrapper[5125]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:15.489268563 +0000 UTC m=+12.259758887,LastTimestamp:2025-12-08 19:29:15.489268563 +0000 UTC m=+12.259758887,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:23 crc kubenswrapper[5125]: > Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.194048 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5432a797daaa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:15.489368746 +0000 UTC m=+12.259859070,LastTimestamp:2025-12-08 19:29:15.489368746 +0000 UTC 
m=+12.259859070,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.199467 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 19:29:23 crc kubenswrapper[5125]: &Event{ObjectMeta:{kube-apiserver-crc.187f543329d3c33a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 19:29:23 crc kubenswrapper[5125]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 19:29:23 crc kubenswrapper[5125]: Dec 08 19:29:23 crc kubenswrapper[5125]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:17.674332986 +0000 UTC m=+14.444823280,LastTimestamp:2025-12-08 19:29:17.674332986 +0000 UTC m=+14.444823280,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 19:29:23 crc kubenswrapper[5125]: > Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.204803 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543329d49b5e openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:17.674388318 +0000 UTC m=+14.444878612,LastTimestamp:2025-12-08 19:29:17.674388318 +0000 UTC m=+14.444878612,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.209053 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f543329d3c33a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 19:29:23 crc kubenswrapper[5125]: &Event{ObjectMeta:{kube-apiserver-crc.187f543329d3c33a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Dec 08 19:29:23 crc kubenswrapper[5125]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 08 19:29:23 crc kubenswrapper[5125]: 
Dec 08 19:29:23 crc kubenswrapper[5125]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:17.674332986 +0000 UTC m=+14.444823280,LastTimestamp:2025-12-08 19:29:17.68064264 +0000 UTC m=+14.451132934,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 19:29:23 crc kubenswrapper[5125]: >
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.213676 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f543329d49b5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f543329d49b5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:17.674388318 +0000 UTC m=+14.444878612,LastTimestamp:2025-12-08 19:29:17.680712971 +0000 UTC m=+14.451203255,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.220929 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 19:29:23 crc kubenswrapper[5125]: &Event{ObjectMeta:{kube-apiserver-crc.187f54336fe3ab79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500
Dec 08 19:29:23 crc kubenswrapper[5125]: body: [+]ping ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]log ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]etcd ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/priority-and-fairness-filter ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/start-apiextensions-informers ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/start-apiextensions-controllers ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/crd-informer-synced ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/start-system-namespaces-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/bootstrap-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/start-kubernetes-service-cidr-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/apiservice-registration-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/apiservice-discovery-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]autoregister-completion ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/apiservice-openapi-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 08 19:29:23 crc kubenswrapper[5125]: livez check failed
Dec 08 19:29:23 crc kubenswrapper[5125]: 
Dec 08 19:29:23 crc kubenswrapper[5125]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:18.849780601 +0000 UTC m=+15.620270875,LastTimestamp:2025-12-08 19:29:18.849780601 +0000 UTC m=+15.620270875,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 19:29:23 crc kubenswrapper[5125]: >
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.225310 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54336fe45d54 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:18.849826132 +0000 UTC m=+15.620316416,LastTimestamp:2025-12-08 19:29:18.849826132 +0000 UTC m=+15.620316416,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.231597 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 19:29:23 crc kubenswrapper[5125]: &Event{ObjectMeta:{kube-apiserver-crc.187f5434572eec79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:41570->192.168.126.11:17697: read: connection reset by peer
Dec 08 19:29:23 crc kubenswrapper[5125]: body: 
Dec 08 19:29:23 crc kubenswrapper[5125]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:22.730249337 +0000 UTC m=+19.500739621,LastTimestamp:2025-12-08 19:29:22.730249337 +0000 UTC m=+19.500739621,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 19:29:23 crc kubenswrapper[5125]: >
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.236730 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5434572fe117 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41570->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:22.730311959 +0000 UTC m=+19.500802253,LastTimestamp:2025-12-08 19:29:22.730311959 +0000 UTC m=+19.500802253,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.241933 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Dec 08 19:29:23 crc kubenswrapper[5125]: &Event{ObjectMeta:{kube-apiserver-crc.187f54345732995b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:41574->192.168.126.11:17697: read: connection reset by peer
Dec 08 19:29:23 crc kubenswrapper[5125]: body: 
Dec 08 19:29:23 crc kubenswrapper[5125]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:22.730490203 +0000 UTC m=+19.500980487,LastTimestamp:2025-12-08 19:29:22.730490203 +0000 UTC m=+19.500980487,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 19:29:23 crc kubenswrapper[5125]: >
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.247235 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f54345733f1c8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41574->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:22.730578376 +0000 UTC m=+19.501068670,LastTimestamp:2025-12-08 19:29:22.730578376 +0000 UTC m=+19.501068670,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.254355 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5430971e7d22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430971e7d22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.623044898 +0000 UTC m=+3.393535182,LastTimestamp:2025-12-08 19:29:22.879372511 +0000 UTC m=+19.649862815,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.263595 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5430a4bba51d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430a4bba51d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.851448093 +0000 UTC m=+3.621938367,LastTimestamp:2025-12-08 19:29:23.081259375 +0000 UTC m=+19.851749649,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.268463 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5430a57147e4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430a57147e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.86335178 +0000 UTC m=+3.633842064,LastTimestamp:2025-12-08 19:29:23.095388422 +0000 UTC m=+19.865878696,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.543097 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.543286 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.544063 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.544092 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.544102 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.544412 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.564978 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.671781 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.796155 5125 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.850419 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.880596 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.882101 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4656d614bd897c28c0c91b5567aa2c471c10494205169665dff9eb77a8dec850"}
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.882219 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.882242 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.882863 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.882896 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.882926 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.882936 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.882903 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.882990 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.883184 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:23 crc kubenswrapper[5125]: E1208 19:29:23.883419 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:23 crc kubenswrapper[5125]: I1208 19:29:23.886953 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:24 crc kubenswrapper[5125]: I1208 19:29:24.673075 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:24 crc kubenswrapper[5125]: I1208 19:29:24.885798 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 08 19:29:24 crc kubenswrapper[5125]: I1208 19:29:24.886203 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Dec 08 19:29:24 crc kubenswrapper[5125]: I1208 19:29:24.888168 5125 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4656d614bd897c28c0c91b5567aa2c471c10494205169665dff9eb77a8dec850" exitCode=255
Dec 08 19:29:24 crc kubenswrapper[5125]: I1208 19:29:24.888230 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"4656d614bd897c28c0c91b5567aa2c471c10494205169665dff9eb77a8dec850"}
Dec 08 19:29:24 crc kubenswrapper[5125]: I1208 19:29:24.888285 5125 scope.go:117] "RemoveContainer" containerID="4afa51403d07d17fada4ad9c4d680fdc6867966b26d0cac2c9848c6e52f8cf76"
Dec 08 19:29:24 crc kubenswrapper[5125]: I1208 19:29:24.888456 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:24 crc kubenswrapper[5125]: I1208 19:29:24.889136 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:24 crc kubenswrapper[5125]: I1208 19:29:24.889174 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:24 crc kubenswrapper[5125]: I1208 19:29:24.889187 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:24 crc kubenswrapper[5125]: E1208 19:29:24.889668 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:24 crc kubenswrapper[5125]: I1208 19:29:24.889962 5125 scope.go:117] "RemoveContainer" containerID="4656d614bd897c28c0c91b5567aa2c471c10494205169665dff9eb77a8dec850"
Dec 08 19:29:24 crc kubenswrapper[5125]: E1208 19:29:24.890211 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 19:29:24 crc kubenswrapper[5125]: E1208 19:29:24.896508 5125 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5434d7ec6afc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:24.890151676 +0000 UTC m=+21.660641950,LastTimestamp:2025-12-08 19:29:24.890151676 +0000 UTC m=+21.660641950,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:25 crc kubenswrapper[5125]: I1208 19:29:25.676089 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:25 crc kubenswrapper[5125]: I1208 19:29:25.894788 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Dec 08 19:29:25 crc kubenswrapper[5125]: I1208 19:29:25.897699 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:25 crc kubenswrapper[5125]: I1208 19:29:25.898660 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:25 crc kubenswrapper[5125]: I1208 19:29:25.898859 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:25 crc kubenswrapper[5125]: I1208 19:29:25.898995 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:25 crc kubenswrapper[5125]: E1208 19:29:25.899640 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:25 crc kubenswrapper[5125]: I1208 19:29:25.900215 5125 scope.go:117] "RemoveContainer" containerID="4656d614bd897c28c0c91b5567aa2c471c10494205169665dff9eb77a8dec850"
Dec 08 19:29:25 crc kubenswrapper[5125]: E1208 19:29:25.900723 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 19:29:25 crc kubenswrapper[5125]: E1208 19:29:25.908867 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5434d7ec6afc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5434d7ec6afc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:24.890151676 +0000 UTC m=+21.660641950,LastTimestamp:2025-12-08 19:29:25.900667287 +0000 UTC m=+22.671157601,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:26 crc kubenswrapper[5125]: E1208 19:29:26.293150 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 08 19:29:26 crc kubenswrapper[5125]: I1208 19:29:26.674967 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:27 crc kubenswrapper[5125]: I1208 19:29:27.674960 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:28 crc kubenswrapper[5125]: I1208 19:29:28.673670 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:29 crc kubenswrapper[5125]: I1208 19:29:29.095248 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:29 crc kubenswrapper[5125]: I1208 19:29:29.096407 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:29 crc kubenswrapper[5125]: I1208 19:29:29.096507 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:29 crc kubenswrapper[5125]: I1208 19:29:29.096530 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:29 crc kubenswrapper[5125]: I1208 19:29:29.096568 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 19:29:29 crc kubenswrapper[5125]: E1208 19:29:29.111870 5125 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Dec 08 19:29:29 crc kubenswrapper[5125]: I1208 19:29:29.676567 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:30 crc kubenswrapper[5125]: I1208 19:29:30.674587 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:30 crc kubenswrapper[5125]: E1208 19:29:30.830322 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 19:29:31 crc kubenswrapper[5125]: I1208 19:29:31.676515 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:32 crc kubenswrapper[5125]: I1208 19:29:32.001869 5125 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:32 crc kubenswrapper[5125]: I1208 19:29:32.002189 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:32 crc kubenswrapper[5125]: I1208 19:29:32.003231 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:32 crc kubenswrapper[5125]: I1208 19:29:32.003350 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:32 crc kubenswrapper[5125]: I1208 19:29:32.003380 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:32 crc kubenswrapper[5125]: E1208 19:29:32.004046 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:32 crc kubenswrapper[5125]: I1208 19:29:32.004583 5125 scope.go:117] "RemoveContainer" containerID="4656d614bd897c28c0c91b5567aa2c471c10494205169665dff9eb77a8dec850"
Dec 08 19:29:32 crc kubenswrapper[5125]: E1208 19:29:32.005072 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 19:29:32 crc kubenswrapper[5125]: E1208 19:29:32.012031 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5434d7ec6afc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5434d7ec6afc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:24.890151676 +0000 UTC m=+21.660641950,LastTimestamp:2025-12-08 19:29:32.004991553 +0000 UTC m=+28.775481867,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:32 crc kubenswrapper[5125]: E1208 19:29:32.489576 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 19:29:32 crc kubenswrapper[5125]: I1208 19:29:32.677067 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:33 crc kubenswrapper[5125]: E1208 19:29:33.234947 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 19:29:33 crc kubenswrapper[5125]: E1208 19:29:33.302178 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 08 19:29:33 crc kubenswrapper[5125]: E1208 19:29:33.642934 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 19:29:33 crc kubenswrapper[5125]: I1208 19:29:33.675430 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:33 crc kubenswrapper[5125]: E1208 19:29:33.797798 5125 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 19:29:33 crc kubenswrapper[5125]: I1208 19:29:33.882929 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:29:33 crc kubenswrapper[5125]: I1208 19:29:33.883310 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 19:29:33 crc kubenswrapper[5125]: I1208 19:29:33.884434 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:29:33 crc kubenswrapper[5125]: I1208 19:29:33.884496 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:29:33 crc kubenswrapper[5125]: I1208 19:29:33.884523 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:29:33 crc kubenswrapper[5125]: E1208 19:29:33.885237 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 19:29:33 crc kubenswrapper[5125]: I1208 19:29:33.885715 5125 scope.go:117] "RemoveContainer" containerID="4656d614bd897c28c0c91b5567aa2c471c10494205169665dff9eb77a8dec850"
Dec 08 19:29:33 crc kubenswrapper[5125]: E1208 19:29:33.886051 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 19:29:33 crc kubenswrapper[5125]: E1208 19:29:33.896170 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5434d7ec6afc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5434d7ec6afc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:24.890151676 +0000 UTC m=+21.660641950,LastTimestamp:2025-12-08 19:29:33.885987977 +0000 UTC m=+30.656478281,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:29:34 crc kubenswrapper[5125]: I1208 19:29:34.675430 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 08 19:29:35 crc kubenswrapper[5125]: I1208
19:29:35.677202 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:36 crc kubenswrapper[5125]: I1208 19:29:36.112176 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:36 crc kubenswrapper[5125]: I1208 19:29:36.113554 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:36 crc kubenswrapper[5125]: I1208 19:29:36.113873 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:36 crc kubenswrapper[5125]: I1208 19:29:36.114068 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:36 crc kubenswrapper[5125]: I1208 19:29:36.114272 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:36 crc kubenswrapper[5125]: E1208 19:29:36.130181 5125 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:29:36 crc kubenswrapper[5125]: I1208 19:29:36.672054 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:37 crc kubenswrapper[5125]: I1208 19:29:37.675111 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 
19:29:38 crc kubenswrapper[5125]: I1208 19:29:38.675488 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:39 crc kubenswrapper[5125]: I1208 19:29:39.677233 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:40 crc kubenswrapper[5125]: E1208 19:29:40.311822 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:29:40 crc kubenswrapper[5125]: I1208 19:29:40.676732 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:41 crc kubenswrapper[5125]: I1208 19:29:41.676046 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:42 crc kubenswrapper[5125]: I1208 19:29:42.676008 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:43 crc kubenswrapper[5125]: I1208 19:29:43.131059 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Dec 08 19:29:43 crc kubenswrapper[5125]: I1208 19:29:43.132398 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:43 crc kubenswrapper[5125]: I1208 19:29:43.132459 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:43 crc kubenswrapper[5125]: I1208 19:29:43.132476 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:43 crc kubenswrapper[5125]: I1208 19:29:43.132507 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:43 crc kubenswrapper[5125]: E1208 19:29:43.141804 5125 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:29:43 crc kubenswrapper[5125]: I1208 19:29:43.676292 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:43 crc kubenswrapper[5125]: E1208 19:29:43.798233 5125 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:29:44 crc kubenswrapper[5125]: I1208 19:29:44.676999 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:45 crc kubenswrapper[5125]: I1208 19:29:45.677237 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" 
cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:46 crc kubenswrapper[5125]: I1208 19:29:46.678170 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:47 crc kubenswrapper[5125]: E1208 19:29:47.315763 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:29:47 crc kubenswrapper[5125]: I1208 19:29:47.677022 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:47 crc kubenswrapper[5125]: I1208 19:29:47.767261 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:47 crc kubenswrapper[5125]: I1208 19:29:47.768449 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:47 crc kubenswrapper[5125]: I1208 19:29:47.768545 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:47 crc kubenswrapper[5125]: I1208 19:29:47.768575 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:47 crc kubenswrapper[5125]: E1208 19:29:47.769300 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:47 crc kubenswrapper[5125]: I1208 19:29:47.769765 5125 scope.go:117] 
"RemoveContainer" containerID="4656d614bd897c28c0c91b5567aa2c471c10494205169665dff9eb77a8dec850" Dec 08 19:29:47 crc kubenswrapper[5125]: E1208 19:29:47.780093 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5430971e7d22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430971e7d22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.623044898 +0000 UTC m=+3.393535182,LastTimestamp:2025-12-08 19:29:47.771031817 +0000 UTC m=+44.541522131,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:48 crc kubenswrapper[5125]: E1208 19:29:48.028686 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5430a4bba51d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430a4bba51d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.851448093 +0000 UTC m=+3.621938367,LastTimestamp:2025-12-08 19:29:48.020932199 +0000 UTC m=+44.791422473,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:48 crc kubenswrapper[5125]: E1208 19:29:48.043031 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5430a57147e4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5430a57147e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:06.86335178 +0000 UTC m=+3.633842064,LastTimestamp:2025-12-08 19:29:48.036140114 +0000 UTC m=+44.806630398,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:48 crc kubenswrapper[5125]: I1208 19:29:48.676465 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Dec 08 19:29:48 crc kubenswrapper[5125]: I1208 19:29:48.958295 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 19:29:48 crc kubenswrapper[5125]: I1208 19:29:48.960593 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b9d9bac85a6057988a70f2be2ef985ad9803dab3543e92a8e87813f668f16eea"} Dec 08 19:29:48 crc kubenswrapper[5125]: I1208 19:29:48.960823 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:48 crc kubenswrapper[5125]: I1208 19:29:48.961704 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:48 crc kubenswrapper[5125]: I1208 19:29:48.961760 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:48 crc kubenswrapper[5125]: I1208 19:29:48.961778 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:48 crc kubenswrapper[5125]: E1208 19:29:48.962257 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:49 crc kubenswrapper[5125]: I1208 19:29:49.677784 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:49 crc kubenswrapper[5125]: I1208 19:29:49.966554 5125 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 19:29:49 crc kubenswrapper[5125]: I1208 19:29:49.967098 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 19:29:49 crc kubenswrapper[5125]: I1208 19:29:49.969400 5125 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b9d9bac85a6057988a70f2be2ef985ad9803dab3543e92a8e87813f668f16eea" exitCode=255 Dec 08 19:29:49 crc kubenswrapper[5125]: I1208 19:29:49.969462 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b9d9bac85a6057988a70f2be2ef985ad9803dab3543e92a8e87813f668f16eea"} Dec 08 19:29:49 crc kubenswrapper[5125]: I1208 19:29:49.969557 5125 scope.go:117] "RemoveContainer" containerID="4656d614bd897c28c0c91b5567aa2c471c10494205169665dff9eb77a8dec850" Dec 08 19:29:49 crc kubenswrapper[5125]: I1208 19:29:49.969765 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:49 crc kubenswrapper[5125]: I1208 19:29:49.970557 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:49 crc kubenswrapper[5125]: I1208 19:29:49.970589 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:49 crc kubenswrapper[5125]: I1208 19:29:49.970599 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:49 crc kubenswrapper[5125]: E1208 19:29:49.970876 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" 
node="crc" Dec 08 19:29:49 crc kubenswrapper[5125]: I1208 19:29:49.971091 5125 scope.go:117] "RemoveContainer" containerID="b9d9bac85a6057988a70f2be2ef985ad9803dab3543e92a8e87813f668f16eea" Dec 08 19:29:49 crc kubenswrapper[5125]: E1208 19:29:49.971276 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:29:49 crc kubenswrapper[5125]: E1208 19:29:49.980492 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5434d7ec6afc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5434d7ec6afc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:24.890151676 +0000 UTC m=+21.660641950,LastTimestamp:2025-12-08 19:29:49.971254473 +0000 UTC m=+46.741744737,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:50 crc kubenswrapper[5125]: I1208 19:29:50.142801 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Dec 08 19:29:50 crc kubenswrapper[5125]: I1208 19:29:50.144720 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:50 crc kubenswrapper[5125]: I1208 19:29:50.144801 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:50 crc kubenswrapper[5125]: I1208 19:29:50.144829 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:50 crc kubenswrapper[5125]: I1208 19:29:50.144872 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:50 crc kubenswrapper[5125]: E1208 19:29:50.159207 5125 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:29:50 crc kubenswrapper[5125]: I1208 19:29:50.675105 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:50 crc kubenswrapper[5125]: I1208 19:29:50.974856 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 19:29:51 crc kubenswrapper[5125]: E1208 19:29:51.224128 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 19:29:51 crc kubenswrapper[5125]: I1208 19:29:51.676291 5125 csi_plugin.go:988] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:52 crc kubenswrapper[5125]: I1208 19:29:52.002122 5125 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:52 crc kubenswrapper[5125]: I1208 19:29:52.002464 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:52 crc kubenswrapper[5125]: I1208 19:29:52.003444 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:52 crc kubenswrapper[5125]: I1208 19:29:52.003513 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:52 crc kubenswrapper[5125]: I1208 19:29:52.003537 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:52 crc kubenswrapper[5125]: E1208 19:29:52.004195 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:52 crc kubenswrapper[5125]: I1208 19:29:52.004635 5125 scope.go:117] "RemoveContainer" containerID="b9d9bac85a6057988a70f2be2ef985ad9803dab3543e92a8e87813f668f16eea" Dec 08 19:29:52 crc kubenswrapper[5125]: E1208 19:29:52.004976 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:29:52 crc kubenswrapper[5125]: E1208 19:29:52.012669 5125 event.go:359] "Server 
rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5434d7ec6afc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5434d7ec6afc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:24.890151676 +0000 UTC m=+21.660641950,LastTimestamp:2025-12-08 19:29:52.004920803 +0000 UTC m=+48.775411107,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:52 crc kubenswrapper[5125]: E1208 19:29:52.098430 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 19:29:52 crc kubenswrapper[5125]: E1208 19:29:52.187667 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 19:29:52 crc kubenswrapper[5125]: I1208 19:29:52.675178 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:53 crc kubenswrapper[5125]: I1208 19:29:53.438865 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:29:53 crc kubenswrapper[5125]: I1208 19:29:53.439142 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:53 crc kubenswrapper[5125]: I1208 19:29:53.440402 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:53 crc kubenswrapper[5125]: I1208 19:29:53.440534 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:53 crc kubenswrapper[5125]: I1208 19:29:53.440583 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:53 crc kubenswrapper[5125]: E1208 19:29:53.441091 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:53 crc kubenswrapper[5125]: I1208 19:29:53.676170 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:53 crc kubenswrapper[5125]: E1208 19:29:53.798805 5125 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:29:54 crc kubenswrapper[5125]: E1208 19:29:54.053075 5125 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API 
group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 19:29:54 crc kubenswrapper[5125]: E1208 19:29:54.323285 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:29:54 crc kubenswrapper[5125]: I1208 19:29:54.676198 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:55 crc kubenswrapper[5125]: I1208 19:29:55.676134 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:56 crc kubenswrapper[5125]: I1208 19:29:56.672845 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:57 crc kubenswrapper[5125]: I1208 19:29:57.160279 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:57 crc kubenswrapper[5125]: I1208 19:29:57.161366 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:57 crc kubenswrapper[5125]: I1208 19:29:57.161448 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:57 crc kubenswrapper[5125]: I1208 19:29:57.161475 5125 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:29:57 crc kubenswrapper[5125]: I1208 19:29:57.161521 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:29:57 crc kubenswrapper[5125]: E1208 19:29:57.174850 5125 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:29:57 crc kubenswrapper[5125]: I1208 19:29:57.671594 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:58 crc kubenswrapper[5125]: I1208 19:29:58.676186 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:29:58 crc kubenswrapper[5125]: I1208 19:29:58.961023 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:29:58 crc kubenswrapper[5125]: I1208 19:29:58.961314 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:29:58 crc kubenswrapper[5125]: I1208 19:29:58.962279 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:29:58 crc kubenswrapper[5125]: I1208 19:29:58.962321 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:29:58 crc kubenswrapper[5125]: I1208 19:29:58.962330 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 
08 19:29:58 crc kubenswrapper[5125]: E1208 19:29:58.963050 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:29:58 crc kubenswrapper[5125]: I1208 19:29:58.963320 5125 scope.go:117] "RemoveContainer" containerID="b9d9bac85a6057988a70f2be2ef985ad9803dab3543e92a8e87813f668f16eea" Dec 08 19:29:58 crc kubenswrapper[5125]: E1208 19:29:58.963510 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:29:58 crc kubenswrapper[5125]: E1208 19:29:58.968554 5125 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5434d7ec6afc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5434d7ec6afc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:29:24.890151676 +0000 UTC m=+21.660641950,LastTimestamp:2025-12-08 19:29:58.963482449 +0000 UTC m=+55.733972723,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 19:29:59 crc kubenswrapper[5125]: I1208 19:29:59.676880 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:00 crc kubenswrapper[5125]: I1208 19:30:00.675291 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:01 crc kubenswrapper[5125]: E1208 19:30:01.328801 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:30:01 crc kubenswrapper[5125]: I1208 19:30:01.673742 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:02 crc kubenswrapper[5125]: I1208 19:30:02.675005 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:03 crc kubenswrapper[5125]: I1208 19:30:03.678408 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:03 crc kubenswrapper[5125]: E1208 19:30:03.799504 
5125 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:30:04 crc kubenswrapper[5125]: I1208 19:30:04.175529 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:04 crc kubenswrapper[5125]: I1208 19:30:04.176902 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:04 crc kubenswrapper[5125]: I1208 19:30:04.176941 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:04 crc kubenswrapper[5125]: I1208 19:30:04.176952 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:04 crc kubenswrapper[5125]: I1208 19:30:04.176976 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:30:04 crc kubenswrapper[5125]: E1208 19:30:04.187639 5125 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 19:30:04 crc kubenswrapper[5125]: I1208 19:30:04.673427 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:05 crc kubenswrapper[5125]: I1208 19:30:05.674410 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:06 crc kubenswrapper[5125]: I1208 19:30:06.675476 5125 csi_plugin.go:988] Failed to contact API server when waiting for 
CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:07 crc kubenswrapper[5125]: I1208 19:30:07.679681 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:08 crc kubenswrapper[5125]: E1208 19:30:08.337757 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 19:30:08 crc kubenswrapper[5125]: I1208 19:30:08.675226 5125 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 19:30:08 crc kubenswrapper[5125]: I1208 19:30:08.707147 5125 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-xk5q6" Dec 08 19:30:08 crc kubenswrapper[5125]: I1208 19:30:08.713402 5125 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-xk5q6" Dec 08 19:30:08 crc kubenswrapper[5125]: I1208 19:30:08.823367 5125 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 08 19:30:09 crc kubenswrapper[5125]: I1208 19:30:09.568331 5125 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 08 19:30:09 crc kubenswrapper[5125]: I1208 19:30:09.715171 5125 certificate_manager.go:715] "Certificate rotation deadline determined" 
logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-07 19:25:08 +0000 UTC" deadline="2026-01-02 22:31:40.667117864 +0000 UTC" Dec 08 19:30:09 crc kubenswrapper[5125]: I1208 19:30:09.715213 5125 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="603h1m30.951909262s" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.188339 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.189410 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.189487 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.189513 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.189705 5125 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.200373 5125 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.200599 5125 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.200635 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.203575 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.203677 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.203700 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.203731 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.203768 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:11Z","lastTransitionTime":"2025-12-08T19:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.225822 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.236309 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.236375 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.236401 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.236432 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.236456 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:11Z","lastTransitionTime":"2025-12-08T19:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.254174 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.263933 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.263971 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.263997 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.264014 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.264023 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:11Z","lastTransitionTime":"2025-12-08T19:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.277533 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.287481 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.287531 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.287546 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.287561 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.287574 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:11Z","lastTransitionTime":"2025-12-08T19:30:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.300179 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.300317 5125 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.300345 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.401448 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.502500 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.603507 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.704367 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.766582 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.766773 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.767583 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.767626 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.767639 5125 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.767782 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.767845 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.767870 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.768289 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:11 crc kubenswrapper[5125]: I1208 19:30:11.768549 5125 scope.go:117] "RemoveContainer" containerID="b9d9bac85a6057988a70f2be2ef985ad9803dab3543e92a8e87813f668f16eea" Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.768651 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.805425 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:11 crc kubenswrapper[5125]: E1208 19:30:11.906073 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:12 crc kubenswrapper[5125]: E1208 19:30:12.006965 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:12 crc kubenswrapper[5125]: I1208 19:30:12.033219 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 19:30:12 crc 
kubenswrapper[5125]: I1208 19:30:12.034697 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af"} Dec 08 19:30:12 crc kubenswrapper[5125]: I1208 19:30:12.034912 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:12 crc kubenswrapper[5125]: I1208 19:30:12.035441 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:12 crc kubenswrapper[5125]: I1208 19:30:12.035482 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:12 crc kubenswrapper[5125]: I1208 19:30:12.035492 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:12 crc kubenswrapper[5125]: E1208 19:30:12.035912 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:12 crc kubenswrapper[5125]: E1208 19:30:12.107083 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:12 crc kubenswrapper[5125]: E1208 19:30:12.207717 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:12 crc kubenswrapper[5125]: E1208 19:30:12.308278 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:12 crc kubenswrapper[5125]: E1208 19:30:12.409250 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:12 crc kubenswrapper[5125]: E1208 19:30:12.509535 5125 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Dec 08 19:30:12 crc kubenswrapper[5125]: E1208 19:30:12.609927 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:12 crc kubenswrapper[5125]: E1208 19:30:12.710023 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:12 crc kubenswrapper[5125]: E1208 19:30:12.810317 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:12 crc kubenswrapper[5125]: E1208 19:30:12.911341 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5125]: E1208 19:30:13.011900 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5125]: E1208 19:30:13.112121 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5125]: E1208 19:30:13.212248 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5125]: E1208 19:30:13.313254 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5125]: E1208 19:30:13.414170 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5125]: E1208 19:30:13.515078 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5125]: E1208 19:30:13.615459 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5125]: E1208 19:30:13.716127 5125 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5125]: E1208 19:30:13.800679 5125 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5125]: E1208 19:30:13.816648 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:13 crc kubenswrapper[5125]: E1208 19:30:13.917638 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 19:30:14.018667 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:14 crc kubenswrapper[5125]: I1208 19:30:14.039923 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:30:14 crc kubenswrapper[5125]: I1208 19:30:14.040346 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 19:30:14 crc kubenswrapper[5125]: I1208 19:30:14.042092 5125 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af" exitCode=255 Dec 08 19:30:14 crc kubenswrapper[5125]: I1208 19:30:14.042134 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af"} Dec 08 19:30:14 crc kubenswrapper[5125]: I1208 19:30:14.042169 5125 scope.go:117] "RemoveContainer" 
containerID="b9d9bac85a6057988a70f2be2ef985ad9803dab3543e92a8e87813f668f16eea" Dec 08 19:30:14 crc kubenswrapper[5125]: I1208 19:30:14.042497 5125 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 19:30:14 crc kubenswrapper[5125]: I1208 19:30:14.043382 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:14 crc kubenswrapper[5125]: I1208 19:30:14.043416 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:14 crc kubenswrapper[5125]: I1208 19:30:14.043455 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 19:30:14.043934 5125 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 19:30:14 crc kubenswrapper[5125]: I1208 19:30:14.044174 5125 scope.go:117] "RemoveContainer" containerID="346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 19:30:14.044341 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 19:30:14.119561 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 19:30:14.220596 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 
19:30:14.321451 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 19:30:14.422395 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 19:30:14.523233 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 19:30:14.624297 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 19:30:14.724597 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 19:30:14.825515 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:14 crc kubenswrapper[5125]: E1208 19:30:14.926144 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:15 crc kubenswrapper[5125]: E1208 19:30:15.026273 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:15 crc kubenswrapper[5125]: I1208 19:30:15.045687 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:30:15 crc kubenswrapper[5125]: E1208 19:30:15.126534 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:15 crc kubenswrapper[5125]: E1208 19:30:15.226652 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:15 crc kubenswrapper[5125]: E1208 19:30:15.327799 5125 kubelet_node_status.go:515] "Error 
getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:15 crc kubenswrapper[5125]: E1208 19:30:15.428600 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:15 crc kubenswrapper[5125]: E1208 19:30:15.529309 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:15 crc kubenswrapper[5125]: E1208 19:30:15.630371 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:15 crc kubenswrapper[5125]: E1208 19:30:15.731200 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:15 crc kubenswrapper[5125]: E1208 19:30:15.832230 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:15 crc kubenswrapper[5125]: E1208 19:30:15.933359 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:16 crc kubenswrapper[5125]: E1208 19:30:16.033677 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:16 crc kubenswrapper[5125]: E1208 19:30:16.134597 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:16 crc kubenswrapper[5125]: E1208 19:30:16.234882 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:16 crc kubenswrapper[5125]: E1208 19:30:16.335927 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:16 crc kubenswrapper[5125]: E1208 19:30:16.436155 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:16 crc kubenswrapper[5125]: E1208 19:30:16.537237 
5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:16 crc kubenswrapper[5125]: E1208 19:30:16.638245 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:16 crc kubenswrapper[5125]: E1208 19:30:16.738847 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:16 crc kubenswrapper[5125]: E1208 19:30:16.839523 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:16 crc kubenswrapper[5125]: E1208 19:30:16.940082 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:17 crc kubenswrapper[5125]: E1208 19:30:17.040306 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:17 crc kubenswrapper[5125]: E1208 19:30:17.141097 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:17 crc kubenswrapper[5125]: E1208 19:30:17.241384 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:17 crc kubenswrapper[5125]: E1208 19:30:17.342460 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:17 crc kubenswrapper[5125]: E1208 19:30:17.443242 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:17 crc kubenswrapper[5125]: E1208 19:30:17.543977 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:17 crc kubenswrapper[5125]: E1208 19:30:17.644558 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:17 crc 
kubenswrapper[5125]: E1208 19:30:17.745752 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:17 crc kubenswrapper[5125]: E1208 19:30:17.846649 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:17 crc kubenswrapper[5125]: E1208 19:30:17.947130 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5125]: I1208 19:30:18.038107 5125 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:18 crc kubenswrapper[5125]: E1208 19:30:18.047865 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5125]: E1208 19:30:18.148334 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5125]: E1208 19:30:18.249392 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5125]: E1208 19:30:18.349510 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5125]: E1208 19:30:18.449717 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5125]: E1208 19:30:18.550126 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5125]: E1208 19:30:18.650715 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5125]: E1208 19:30:18.751065 5125 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5125]: E1208 19:30:18.851947 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:18 crc kubenswrapper[5125]: E1208 19:30:18.952298 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5125]: E1208 19:30:19.052852 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5125]: E1208 19:30:19.154005 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5125]: E1208 19:30:19.254999 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5125]: E1208 19:30:19.356160 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5125]: E1208 19:30:19.456563 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5125]: E1208 19:30:19.557527 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5125]: E1208 19:30:19.658238 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5125]: E1208 19:30:19.758818 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5125]: E1208 19:30:19.859686 5125 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 19:30:19 crc kubenswrapper[5125]: I1208 19:30:19.928332 5125 reflector.go:430] 
"Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:19 crc kubenswrapper[5125]: I1208 19:30:19.961703 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:19 crc kubenswrapper[5125]: I1208 19:30:19.961764 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:19 crc kubenswrapper[5125]: I1208 19:30:19.961774 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:19 crc kubenswrapper[5125]: I1208 19:30:19.961792 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:19 crc kubenswrapper[5125]: I1208 19:30:19.961802 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:19Z","lastTransitionTime":"2025-12-08T19:30:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:19 crc kubenswrapper[5125]: I1208 19:30:19.983022 5125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 19:30:19 crc kubenswrapper[5125]: I1208 19:30:19.994102 5125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.064023 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.064094 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.064116 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.064153 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.064175 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:20Z","lastTransitionTime":"2025-12-08T19:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.093120 5125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.166936 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.167052 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.167078 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.167108 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.167132 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:20Z","lastTransitionTime":"2025-12-08T19:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.194708 5125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.269578 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.269680 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.269703 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.269726 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.269743 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:20Z","lastTransitionTime":"2025-12-08T19:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.294351 5125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.372643 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.372708 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.372726 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.372751 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.372769 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:20Z","lastTransitionTime":"2025-12-08T19:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.475708 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.475787 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.475807 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.475833 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.475852 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:20Z","lastTransitionTime":"2025-12-08T19:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.577589 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.577674 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.577686 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.577703 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.577715 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:20Z","lastTransitionTime":"2025-12-08T19:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.680146 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.680212 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.680232 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.680255 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.680273 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:20Z","lastTransitionTime":"2025-12-08T19:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.708427 5125 apiserver.go:52] "Watching apiserver" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.719344 5125 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.720123 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-txvvl","openshift-network-operator/iptables-alerter-5jnd7","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx","openshift-image-registry/node-ca-jjj2h","openshift-kube-apiserver/kube-apiserver-crc","openshift-multus/multus-additional-cni-plugins-rjgzs","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-node-k9whn","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-machine-config-operator/machine-config-daemon-slhjr","openshift-multus/multus-9p7g8","openshift-multus/network-metrics-daemon-7lwbz","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-etcd/etcd-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-network-diagnostics/network-check-target-fhkjl"] Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.721635 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.722796 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.722935 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.723837 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.723927 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.724950 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.725155 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.725588 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.725966 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.726418 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.727013 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.727085 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.727186 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.727955 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.728713 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.728912 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.730496 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 
08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.730508 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.740008 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-jjj2h" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.742375 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.742400 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.742591 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.744389 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.746095 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.746981 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.748665 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.748784 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.748700 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.748886 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.748740 5125 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.749951 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.750157 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.751222 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.753664 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.754698 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.755764 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.755946 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.756093 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.756132 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.756204 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.756294 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.756294 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.756347 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.756560 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.756809 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.758563 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.759716 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.760739 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.762992 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.763076 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.765796 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.766629 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-txvvl" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.769686 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.770158 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.770468 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.772321 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.776036 5125 scope.go:117] "RemoveContainer" containerID="346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af" Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.776378 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.777391 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.777666 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.782739 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.782797 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.782816 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.782779 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.782839 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.783068 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:20Z","lastTransitionTime":"2025-12-08T19:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.787264 5125 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.794864 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806072 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806159 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 19:30:20 crc 
kubenswrapper[5125]: I1208 19:30:20.806191 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806215 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806236 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806257 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806279 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806298 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806319 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806339 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806364 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806387 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806408 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 19:30:20 crc 
kubenswrapper[5125]: I1208 19:30:20.806429 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806451 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806473 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806495 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806519 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806542 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806562 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806583 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806636 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806669 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806700 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:20 crc 
kubenswrapper[5125]: I1208 19:30:20.806811 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806837 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806859 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806882 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806904 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806924 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806945 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806966 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806986 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807007 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.806973 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807032 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807058 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807087 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807107 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807130 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807154 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807173 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807206 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807366 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807365 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807497 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807655 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807694 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.807873 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.808099 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.808496 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.808235 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.808166 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.808569 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.808653 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.808751 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.808789 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.808864 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.808892 5125 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809013 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809030 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809041 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809082 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809090 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809082 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809155 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809226 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809260 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809331 5125 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809410 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809435 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809448 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809523 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809561 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809596 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809655 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809686 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809449 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809482 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.809715 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:21.309682573 +0000 UTC m=+78.080172887 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.812530 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809883 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.809952 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.812757 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.810172 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.810270 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.810358 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.810542 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.810717 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.810811 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.810893 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.811175 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.811418 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.811531 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.812934 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813114 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813155 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813189 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813225 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813260 5125 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813299 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813335 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813377 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813412 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813446 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813481 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813515 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813550 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813586 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813644 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813680 5125 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813715 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813748 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813782 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813816 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813854 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: 
\"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813893 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813927 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813967 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814020 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814073 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814123 5125 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814169 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814216 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814267 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814318 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814369 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" 
(UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814418 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814470 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814517 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814572 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814660 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814713 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814764 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814825 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814886 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814941 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814993 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 19:30:20 crc kubenswrapper[5125]: 
I1208 19:30:20.815049 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815099 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815150 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815206 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815273 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815601 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" 
(UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815712 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815772 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815834 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815886 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815940 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.816002 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.816063 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.816118 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818246 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818289 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818314 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 
08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818347 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818369 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818392 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818415 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818441 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818464 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818489 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818511 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818535 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818558 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818580 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 
19:30:20.818603 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818665 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.811584 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818419 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.811638 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.811662 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813045 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813100 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813594 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.813843 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814153 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814404 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814557 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.814994 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815016 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815165 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815564 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815650 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815803 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.815823 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.816177 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.816691 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.816895 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.819690 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.816926 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.817089 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.817284 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.817603 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.817768 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.817898 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818047 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818679 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.818962 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.819154 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.819189 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.819204 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.819858 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.819867 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820024 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.819690 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820255 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820301 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820324 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820364 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820452 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820530 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820591 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820577 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820684 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820736 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820765 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820841 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820858 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820896 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820919 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820943 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820968 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.820993 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821015 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821037 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821060 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821091 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821114 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821128 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821150 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821180 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821201 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821224 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821246 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821268 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.821495 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822014 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822238 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822318 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822394 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822450 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822505 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822566 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822670 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822736 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822791 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822845 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822899 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822952 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823016 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823071 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823221 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823281 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823387 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823454 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823516 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823570 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823674 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823736 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823795 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823853 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823910 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823968 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824024 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824081 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824146 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824207 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824265 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824325 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824399 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824458 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824518 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824572 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824746 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824823 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824950 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825016 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825081 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825142 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825201 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825259 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825319 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825381 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825449 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825526 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825657 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825739 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: 
\"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825811 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825882 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825947 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.826011 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.826071 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.826129 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.826188 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.826248 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.827147 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822252 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.822803 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823039 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823077 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823131 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823317 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823441 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823469 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823736 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823805 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823816 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.823887 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824119 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824408 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824673 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824704 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824808 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824763 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.824912 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825069 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825284 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825350 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825561 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825814 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.825835 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.826322 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.826358 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.826713 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.826772 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.827408 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.827400 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.827423 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.827789 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.828045 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.828310 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.828190 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn
-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\
\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"n
ame\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.828649 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.828748 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.828769 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.829138 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.829712 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.829718 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.830109 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.829893 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.830276 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.831536 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.832236 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.832446 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.832501 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.832540 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.832525 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.832750 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.832940 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.833039 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.833271 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.834330 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.834808 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.834820 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.834848 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.834942 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.834979 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835007 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835033 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835062 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835087 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835120 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835147 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835411 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836177 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-systemd\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836228 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/d8cea827-b8e3-4d92-adea-df0afd2397da-proxy-tls\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836268 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8cea827-b8e3-4d92-adea-df0afd2397da-mcd-auth-proxy-config\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836292 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptppk\" (UniqueName: \"kubernetes.io/projected/afa3059b-1744-4855-ab93-3133529920d5-kube-api-access-ptppk\") pod \"node-resolver-txvvl\" (UID: \"afa3059b-1744-4855-ab93-3133529920d5\") " pod="openshift-dns/node-resolver-txvvl" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836330 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836360 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-cnibin\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: 
I1208 19:30:20.836391 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e25c18b2-98b7-4c40-a059-08f4821dea99-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836419 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836441 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-systemd-units\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836460 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/afa3059b-1744-4855-ab93-3133529920d5-hosts-file\") pod \"node-resolver-txvvl\" (UID: \"afa3059b-1744-4855-ab93-3133529920d5\") " pod="openshift-dns/node-resolver-txvvl" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836483 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b938d768-ccce-45a6-a982-3f5d6f1a7d98-cni-binary-copy\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " 
pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836501 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e25c18b2-98b7-4c40-a059-08f4821dea99-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836518 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmsnc\" (UniqueName: \"kubernetes.io/projected/e25c18b2-98b7-4c40-a059-08f4821dea99-kube-api-access-rmsnc\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836535 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aabf1825-0c19-45de-9f9e-fe94777752e6-ovn-node-metrics-cert\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836552 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836579 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-bin\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836595 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c9bz\" (UniqueName: \"kubernetes.io/projected/d8cea827-b8e3-4d92-adea-df0afd2397da-kube-api-access-4c9bz\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836644 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836674 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-cnibin\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836691 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836710 5125 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-netns\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836725 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-env-overrides\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836745 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/afa3059b-1744-4855-ab93-3133529920d5-tmp-dir\") pod \"node-resolver-txvvl\" (UID: \"afa3059b-1744-4855-ab93-3133529920d5\") " pod="openshift-dns/node-resolver-txvvl" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836763 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-var-lib-cni-bin\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836779 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-var-lib-kubelet\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836797 5125 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-etc-kubernetes\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836814 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twvrb\" (UniqueName: \"kubernetes.io/projected/48d0e864-6620-4a75-baa4-8653836f3aab-kube-api-access-twvrb\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836836 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836855 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-os-release\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836871 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-node-log\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc 
kubenswrapper[5125]: I1208 19:30:20.836890 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42xvf\" (UniqueName: \"kubernetes.io/projected/aabf1825-0c19-45de-9f9e-fe94777752e6-kube-api-access-42xvf\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836909 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8qzs\" (UniqueName: \"kubernetes.io/projected/9a677937-278d-4989-b196-40d5daba436d-kube-api-access-f8qzs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836928 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/48d0e864-6620-4a75-baa4-8653836f3aab-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836949 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836966 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-var-lib-openvswitch\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836985 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-script-lib\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837008 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-os-release\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837026 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzwqc\" (UniqueName: \"kubernetes.io/projected/b938d768-ccce-45a6-a982-3f5d6f1a7d98-kube-api-access-nzwqc\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837048 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837065 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-system-cni-dir\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835119 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835140 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835620 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835670 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837088 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.835736 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836077 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836295 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836583 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836712 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836603 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836739 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.836887 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837038 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837053 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837367 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837086 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-system-cni-dir\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837694 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837868 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837870 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.837985 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.838058 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.838330 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.838351 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.838424 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.838801 5125 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.838873 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.838892 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:21.338868605 +0000 UTC m=+78.109358879 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.838902 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.839755 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.839778 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.840056 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.840743 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-ovn\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.840825 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.840952 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d8cea827-b8e3-4d92-adea-df0afd2397da-rootfs\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841070 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841093 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841147 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-log-socket\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841292 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-config\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841392 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841393 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841494 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-socket-dir-parent\") pod \"multus-9p7g8\" (UID: 
\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841500 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841533 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841518 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841586 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.841541 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/e25c18b2-98b7-4c40-a059-08f4821dea99-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.842154 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.842507 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.842564 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-openvswitch\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.842585 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: 
"marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.842598 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.842709 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-daemon-config\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.842831 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-run-multus-certs\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.842873 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-kubelet\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.842924 5125 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.842972 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.843033 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-cni-dir\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.843071 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-run-k8s-cni-cncf-io\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.843078 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.842780 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.843107 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-run-netns\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.843191 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05229a97-6cb6-4842-9ec3-f68831b2daf5-host\") pod \"node-ca-jjj2h\" (UID: \"05229a97-6cb6-4842-9ec3-f68831b2daf5\") " pod="openshift-image-registry/node-ca-jjj2h" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.843240 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-etc-openvswitch\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.843580 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: 
"d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.843643 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.843842 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.844324 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.844502 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.844688 5125 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.844489 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.844788 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.844895 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.844986 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.843347 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-var-lib-cni-multus\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845205 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-conf-dir\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845245 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/05229a97-6cb6-4842-9ec3-f68831b2daf5-serviceca\") pod \"node-ca-jjj2h\" (UID: \"05229a97-6cb6-4842-9ec3-f68831b2daf5\") " pod="openshift-image-registry/node-ca-jjj2h" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845279 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdnq7\" (UniqueName: \"kubernetes.io/projected/05229a97-6cb6-4842-9ec3-f68831b2daf5-kube-api-access-jdnq7\") pod \"node-ca-jjj2h\" (UID: \"05229a97-6cb6-4842-9ec3-f68831b2daf5\") " pod="openshift-image-registry/node-ca-jjj2h" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845313 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-slash\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845348 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-ovn-kubernetes\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845353 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845383 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-netd\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845425 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845532 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod 
\"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.845584 5125 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.845678 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:21.345656577 +0000 UTC m=+78.116146861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845701 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-hostroot\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845796 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.845899 5125 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846099 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846121 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846143 5125 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846157 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846165 5125 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846218 5125 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846239 5125 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846255 5125 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846272 5125 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846290 5125 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846306 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: 
I1208 19:30:20.846323 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846338 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846354 5125 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846370 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846386 5125 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846402 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846418 5125 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846436 5125 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846453 5125 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846471 5125 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846486 5125 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846502 5125 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846518 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846536 5125 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846553 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: 
\"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846573 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846587 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846603 5125 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846649 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846664 5125 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846682 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846699 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" 
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846513 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846715 5125 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846731 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846749 5125 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846765 5125 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846783 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846799 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846816 5125 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846656 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846911 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846930 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846947 5125 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846963 5125 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846978 5125 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846978 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.846993 5125 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847031 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847048 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847065 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: 
I1208 19:30:20.847083 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847096 5125 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847107 5125 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847119 5125 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847133 5125 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847146 5125 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847160 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847174 5125 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847186 5125 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847198 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847212 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847225 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847238 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847251 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847264 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: 
\"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847277 5125 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847314 5125 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847330 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847347 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847363 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847380 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847383 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: 
"etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847399 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847416 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847450 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847467 5125 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847484 5125 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847499 5125 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847511 5125 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847524 5125 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847537 5125 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847549 5125 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847561 5125 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847573 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847585 5125 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847598 5125 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: 
I1208 19:30:20.847648 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847667 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847679 5125 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847692 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847704 5125 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847716 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847728 5125 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847740 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: 
\"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847753 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847766 5125 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847780 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847792 5125 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847804 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847817 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847829 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node 
\"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847842 5125 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847854 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847866 5125 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847878 5125 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847889 5125 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847900 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847913 5125 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847927 5125 reconciler_common.go:299] "Volume 
detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847939 5125 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847952 5125 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847975 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.847993 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848008 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848021 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848034 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on 
node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848046 5125 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848057 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848069 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848081 5125 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848094 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848106 5125 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848119 5125 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848130 5125 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848144 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848156 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848168 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848181 5125 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848194 5125 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848207 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848220 5125 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848234 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848248 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848262 5125 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848273 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848285 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848297 5125 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848309 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: 
\"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848322 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848335 5125 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848348 5125 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848359 5125 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848372 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848559 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.848387 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849072 5125 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849088 5125 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849099 5125 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849174 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849192 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849206 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849220 5125 
reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849232 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849244 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849257 5125 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849270 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849282 5125 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849311 5125 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849324 5125 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") 
on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849280 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849338 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849351 5125 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.849446 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850258 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850345 5125 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850345 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850369 5125 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850408 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850438 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850458 5125 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850475 5125 reconciler_common.go:299] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850493 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850518 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850535 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850553 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850571 5125 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850588 5125 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850649 5125 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850671 5125 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850691 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850709 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850727 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850747 5125 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850766 5125 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850784 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: 
\"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850804 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850824 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850842 5125 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850860 5125 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850878 5125 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850898 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850915 5125 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 
19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.850932 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.851134 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.852042 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.856920 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.856915 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.857217 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.858010 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.858238 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.858388 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.859010 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.859832 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.860118 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.861267 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.861313 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.861318 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.861779 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.861977 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.862221 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.862248 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.862274 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.862362 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.862380 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.862402 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.862419 5125 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.862505 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:21.362481278 +0000 UTC m=+78.132971552 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.862502 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.862755 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.862890 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.863408 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.865117 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.866539 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.866597 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.866636 5125 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.866696 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2025-12-08 19:30:21.366677081 +0000 UTC m=+78.137167365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.867200 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.868694 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.868942 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.869754 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.875072 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.875379 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.875901 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.881447 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.886053 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.886131 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.886157 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.886186 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.886208 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:20Z","lastTransitionTime":"2025-12-08T19:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.889962 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.893817 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.901428 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.904118 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.922572 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etc
d-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.932972 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.944401 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952283 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-var-lib-kubelet\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952330 5125 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-etc-kubernetes\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952357 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-twvrb\" (UniqueName: \"kubernetes.io/projected/48d0e864-6620-4a75-baa4-8653836f3aab-kube-api-access-twvrb\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952376 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-var-lib-kubelet\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952385 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-os-release\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952452 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-node-log\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952485 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-node-log\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952492 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-42xvf\" (UniqueName: \"kubernetes.io/projected/aabf1825-0c19-45de-9f9e-fe94777752e6-kube-api-access-42xvf\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952458 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-os-release\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952525 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f8qzs\" (UniqueName: \"kubernetes.io/projected/9a677937-278d-4989-b196-40d5daba436d-kube-api-access-f8qzs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952663 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/48d0e864-6620-4a75-baa4-8653836f3aab-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952517 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-etc-kubernetes\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.952964 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-var-lib-openvswitch\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953009 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-script-lib\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953043 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-os-release\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953089 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nzwqc\" (UniqueName: \"kubernetes.io/projected/b938d768-ccce-45a6-a982-3f5d6f1a7d98-kube-api-access-nzwqc\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953146 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-system-cni-dir\") 
pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953179 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-system-cni-dir\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953210 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-ovn\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953245 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d8cea827-b8e3-4d92-adea-df0afd2397da-rootfs\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953303 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-system-cni-dir\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953305 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-system-cni-dir\") pod \"multus-9p7g8\" (UID: 
\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953103 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-var-lib-openvswitch\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953389 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-log-socket\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953351 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-log-socket\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953395 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-ovn\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953310 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d8cea827-b8e3-4d92-adea-df0afd2397da-rootfs\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:30:20 crc kubenswrapper[5125]: 
I1208 19:30:20.953454 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-config\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953551 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-socket-dir-parent\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953594 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/e25c18b2-98b7-4c40-a059-08f4821dea99-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.953678 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-os-release\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.954467 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-socket-dir-parent\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.954530 5125 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-openvswitch\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.954567 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.954603 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-daemon-config\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.954637 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-config\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.954660 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-run-multus-certs\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.956071 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-kubelet\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.956766 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-cni-dir\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.957109 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-run-k8s-cni-cncf-io\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.957362 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-run-netns\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.957693 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05229a97-6cb6-4842-9ec3-f68831b2daf5-host\") pod \"node-ca-jjj2h\" (UID: \"05229a97-6cb6-4842-9ec3-f68831b2daf5\") " pod="openshift-image-registry/node-ca-jjj2h"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.958042 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-etc-openvswitch\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.958308 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-var-lib-cni-multus\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.958455 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-conf-dir\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.958662 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/05229a97-6cb6-4842-9ec3-f68831b2daf5-serviceca\") pod \"node-ca-jjj2h\" (UID: \"05229a97-6cb6-4842-9ec3-f68831b2daf5\") " pod="openshift-image-registry/node-ca-jjj2h"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.958771 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-var-lib-cni-multus\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.954971 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-script-lib\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc
kubenswrapper[5125]: I1208 19:30:20.958832 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-etc-openvswitch\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.957290 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-run-k8s-cni-cncf-io\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.957803 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-run-netns\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.958885 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-conf-dir\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.955110 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-openvswitch\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.955798 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/e25c18b2-98b7-4c40-a059-08f4821dea99-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.957836 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/05229a97-6cb6-4842-9ec3-f68831b2daf5-host\") pod \"node-ca-jjj2h\" (UID: \"05229a97-6cb6-4842-9ec3-f68831b2daf5\") " pod="openshift-image-registry/node-ca-jjj2h"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.958746 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/48d0e864-6620-4a75-baa4-8653836f3aab-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.956383 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.956374 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-kubelet\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.955982 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-daemon-config\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.954707 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-run-multus-certs\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959407 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jdnq7\" (UniqueName: \"kubernetes.io/projected/05229a97-6cb6-4842-9ec3-f68831b2daf5-kube-api-access-jdnq7\") pod \"node-ca-jjj2h\" (UID: \"05229a97-6cb6-4842-9ec3-f68831b2daf5\") " pod="openshift-image-registry/node-ca-jjj2h"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959453 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-slash\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959478 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-ovn-kubernetes\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959506 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-netd\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959531 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959572 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-hostroot\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959599 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959644 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-systemd\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959669 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d8cea827-b8e3-4d92-adea-df0afd2397da-proxy-tls\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959693 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8cea827-b8e3-4d92-adea-df0afd2397da-mcd-auth-proxy-config\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959723 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ptppk\" (UniqueName: \"kubernetes.io/projected/afa3059b-1744-4855-ab93-3133529920d5-kube-api-access-ptppk\") pod \"node-resolver-txvvl\" (UID: \"afa3059b-1744-4855-ab93-3133529920d5\") " pod="openshift-dns/node-resolver-txvvl"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959764 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-cnibin\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959797 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e25c18b2-98b7-4c40-a059-08f4821dea99-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959822 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName:
\"kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959844 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-systemd-units\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959869 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/afa3059b-1744-4855-ab93-3133529920d5-hosts-file\") pod \"node-resolver-txvvl\" (UID: \"afa3059b-1744-4855-ab93-3133529920d5\") " pod="openshift-dns/node-resolver-txvvl"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959891 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b938d768-ccce-45a6-a982-3f5d6f1a7d98-cni-binary-copy\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959918 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e25c18b2-98b7-4c40-a059-08f4821dea99-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959944 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmsnc\" (UniqueName: \"kubernetes.io/projected/e25c18b2-98b7-4c40-a059-08f4821dea99-kube-api-access-rmsnc\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959967 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aabf1825-0c19-45de-9f9e-fe94777752e6-ovn-node-metrics-cert\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.959991 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960020 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-bin\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960044 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4c9bz\" (UniqueName: \"kubernetes.io/projected/d8cea827-b8e3-4d92-adea-df0afd2397da-kube-api-access-4c9bz\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960068 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960094 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-cnibin\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960115 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960139 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-netns\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960159 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-env-overrides\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960181 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName:
\"kubernetes.io/empty-dir/afa3059b-1744-4855-ab93-3133529920d5-tmp-dir\") pod \"node-resolver-txvvl\" (UID: \"afa3059b-1744-4855-ab93-3133529920d5\") " pod="openshift-dns/node-resolver-txvvl"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960202 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-var-lib-cni-bin\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960313 5125 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960327 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960340 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960357 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960370 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960383 5125 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960396 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960410 5125 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960426 5125 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960440 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960452 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960465 5125 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960481 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960494 5125 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960509 5125 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960522 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960534 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960548 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960560 5125 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960573 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName:
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960586 5125 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960598 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960627 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960638 5125 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960650 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960663 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960674 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960686 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960701 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960716 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960728 5125 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960742 5125 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960755 5125 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960768 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960782 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960782 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-systemd-units\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960796 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.957255 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-multus-cni-dir\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960816 5125 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\""
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960788 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/05229a97-6cb6-4842-9ec3-f68831b2daf5-serviceca\") pod \"node-ca-jjj2h\" (UID: \"05229a97-6cb6-4842-9ec3-f68831b2daf5\") " pod="openshift-image-registry/node-ca-jjj2h"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960861 5125 operation_generator.go:615]
"MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.960906 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-hostroot\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.960938 5125 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 19:30:20 crc kubenswrapper[5125]: E1208 19:30:20.961001 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs podName:9a677937-278d-4989-b196-40d5daba436d nodeName:}" failed. No retries permitted until 2025-12-08 19:30:21.460984951 +0000 UTC m=+78.231475225 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs") pod "network-metrics-daemon-7lwbz" (UID: "9a677937-278d-4989-b196-40d5daba436d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.961061 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-cnibin\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.961159 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.961193 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-netns\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.961583 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-systemd\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.962238 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/afa3059b-1744-4855-ab93-3133529920d5-hosts-file\") pod \"node-resolver-txvvl\" (UID: \"afa3059b-1744-4855-ab93-3133529920d5\") " pod="openshift-dns/node-resolver-txvvl"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.963048 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.963119 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-env-overrides\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.963174 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b938d768-ccce-45a6-a982-3f5d6f1a7d98-host-var-lib-cni-bin\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.955072 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.963198 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName:
\"kubernetes.io/host-path/e25c18b2-98b7-4c40-a059-08f4821dea99-cnibin\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.963837 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b938d768-ccce-45a6-a982-3f5d6f1a7d98-cni-binary-copy\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.964193 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/afa3059b-1744-4855-ab93-3133529920d5-tmp-dir\") pod \"node-resolver-txvvl\" (UID: \"afa3059b-1744-4855-ab93-3133529920d5\") " pod="openshift-dns/node-resolver-txvvl" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.964272 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e25c18b2-98b7-4c40-a059-08f4821dea99-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.964362 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e25c18b2-98b7-4c40-a059-08f4821dea99-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.964399 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.964418 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-bin\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.964455 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-netd\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.964462 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-ovn-kubernetes\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.964492 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-slash\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.965383 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/d8cea827-b8e3-4d92-adea-df0afd2397da-mcd-auth-proxy-config\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.965491 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.966049 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d8cea827-b8e3-4d92-adea-df0afd2397da-proxy-tls\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.970035 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aabf1825-0c19-45de-9f9e-fe94777752e6-ovn-node-metrics-cert\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.976171 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-42xvf\" (UniqueName: \"kubernetes.io/projected/aabf1825-0c19-45de-9f9e-fe94777752e6-kube-api-access-42xvf\") pod \"ovnkube-node-k9whn\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.983146 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzwqc\" (UniqueName: 
\"kubernetes.io/projected/b938d768-ccce-45a6-a982-3f5d6f1a7d98-kube-api-access-nzwqc\") pod \"multus-9p7g8\" (UID: \"b938d768-ccce-45a6-a982-3f5d6f1a7d98\") " pod="openshift-multus/multus-9p7g8" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.991256 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c9bz\" (UniqueName: \"kubernetes.io/projected/d8cea827-b8e3-4d92-adea-df0afd2397da-kube-api-access-4c9bz\") pod \"machine-config-daemon-slhjr\" (UID: \"d8cea827-b8e3-4d92-adea-df0afd2397da\") " pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.992277 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.992332 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.992393 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.992418 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.992439 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:20Z","lastTransitionTime":"2025-12-08T19:30:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.992946 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-twvrb\" (UniqueName: \"kubernetes.io/projected/48d0e864-6620-4a75-baa4-8653836f3aab-kube-api-access-twvrb\") pod \"ovnkube-control-plane-57b78d8988-w8mbx\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.997309 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:20 crc kubenswrapper[5125]: I1208 19:30:20.999094 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdnq7\" (UniqueName: \"kubernetes.io/projected/05229a97-6cb6-4842-9ec3-f68831b2daf5-kube-api-access-jdnq7\") pod \"node-ca-jjj2h\" (UID: \"05229a97-6cb6-4842-9ec3-f68831b2daf5\") " pod="openshift-image-registry/node-ca-jjj2h" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.003060 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptppk\" (UniqueName: \"kubernetes.io/projected/afa3059b-1744-4855-ab93-3133529920d5-kube-api-access-ptppk\") pod \"node-resolver-txvvl\" (UID: \"afa3059b-1744-4855-ab93-3133529920d5\") " pod="openshift-dns/node-resolver-txvvl" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.005124 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8qzs\" (UniqueName: \"kubernetes.io/projected/9a677937-278d-4989-b196-40d5daba436d-kube-api-access-f8qzs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.007807 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmsnc\" (UniqueName: \"kubernetes.io/projected/e25c18b2-98b7-4c40-a059-08f4821dea99-kube-api-access-rmsnc\") pod \"multus-additional-cni-plugins-rjgzs\" (UID: \"e25c18b2-98b7-4c40-a059-08f4821dea99\") " pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.019963 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.033923 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.038869 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: W1208 19:30:21.045765 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-40bd5d04c43a4a88929bf61100534de641f72e615956d3fce1b1e2dc9e5f2034 WatchSource:0}: Error finding container 40bd5d04c43a4a88929bf61100534de641f72e615956d3fce1b1e2dc9e5f2034: Status 404 returned error can't find the container with id 40bd5d04c43a4a88929bf61100534de641f72e615956d3fce1b1e2dc9e5f2034 Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.046622 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.047494 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:21 crc kubenswrapper[5125]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:21 crc kubenswrapper[5125]: set -o allexport Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 19:30:21 crc kubenswrapper[5125]: source /etc/kubernetes/apiserver-url.env Dec 08 19:30:21 crc kubenswrapper[5125]: else Dec 08 19:30:21 crc kubenswrapper[5125]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 19:30:21 crc kubenswrapper[5125]: exit 1 Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 19:30:21 crc kubenswrapper[5125]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:21 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.048663 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.050766 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: W1208 19:30:21.060500 5125 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-3821d9c69b8cf4e36407de80aa59e65577f4c20684a08b3b54376954965c6f0a WatchSource:0}: Error finding container 3821d9c69b8cf4e36407de80aa59e65577f4c20684a08b3b54376954965c6f0a: Status 404 returned error can't find the container with id 3821d9c69b8cf4e36407de80aa59e65577f4c20684a08b3b54376954965c6f0a Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.062135 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.065073 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"40bd5d04c43a4a88929bf61100534de641f72e615956d3fce1b1e2dc9e5f2034"} Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.065349 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:21 crc kubenswrapper[5125]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:21 crc kubenswrapper[5125]: set -o allexport Dec 08 19:30:21 crc kubenswrapper[5125]: source "/env/_master" Dec 08 19:30:21 crc kubenswrapper[5125]: set +o allexport Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 08 19:30:21 crc kubenswrapper[5125]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 19:30:21 crc kubenswrapper[5125]: ho_enable="--enable-hybrid-overlay" Dec 08 19:30:21 crc kubenswrapper[5125]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 19:30:21 crc kubenswrapper[5125]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 19:30:21 crc kubenswrapper[5125]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 19:30:21 crc kubenswrapper[5125]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:21 crc kubenswrapper[5125]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 19:30:21 crc kubenswrapper[5125]: --webhook-host=127.0.0.1 \ Dec 08 19:30:21 crc kubenswrapper[5125]: --webhook-port=9743 \ Dec 08 19:30:21 crc kubenswrapper[5125]: ${ho_enable} \ Dec 08 19:30:21 crc kubenswrapper[5125]: --enable-interconnect \ Dec 08 19:30:21 crc kubenswrapper[5125]: --disable-approver \ Dec 08 19:30:21 crc kubenswrapper[5125]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 19:30:21 crc kubenswrapper[5125]: --wait-for-kubernetes-api=200s \ Dec 08 19:30:21 crc kubenswrapper[5125]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 19:30:21 crc kubenswrapper[5125]: --loglevel="${LOGLEVEL}" Dec 08 19:30:21 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:21 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.066838 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.068835 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:21 crc kubenswrapper[5125]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:21 crc kubenswrapper[5125]: set -o allexport Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 19:30:21 crc kubenswrapper[5125]: source /etc/kubernetes/apiserver-url.env Dec 08 19:30:21 crc kubenswrapper[5125]: else Dec 08 19:30:21 crc kubenswrapper[5125]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 19:30:21 crc kubenswrapper[5125]: exit 1 Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 19:30:21 crc kubenswrapper[5125]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:21 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.070187 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.070424 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:21 crc kubenswrapper[5125]: container 
&Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:21 crc kubenswrapper[5125]: set -o allexport Dec 08 19:30:21 crc kubenswrapper[5125]: source "/env/_master" Dec 08 19:30:21 crc kubenswrapper[5125]: set +o allexport Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: Dec 08 19:30:21 crc kubenswrapper[5125]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 19:30:21 crc kubenswrapper[5125]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:21 crc kubenswrapper[5125]: --disable-webhook \ Dec 08 19:30:21 crc kubenswrapper[5125]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 19:30:21 crc kubenswrapper[5125]: --loglevel="${LOGLEVEL}" Dec 08 19:30:21 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:21 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.071748 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 19:30:21 crc kubenswrapper[5125]: W1208 19:30:21.074145 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-916c7f3924842bd52ada7f6a194d107abdb621023a2c95f4d0a892706b36c166 WatchSource:0}: Error finding container 916c7f3924842bd52ada7f6a194d107abdb621023a2c95f4d0a892706b36c166: Status 404 returned error can't find the container with id 916c7f3924842bd52ada7f6a194d107abdb621023a2c95f4d0a892706b36c166 Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.075970 5125 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.077730 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.077934 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-jjj2h" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.079414 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.084780 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:21 crc kubenswrapper[5125]: W1208 19:30:21.086959 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05229a97_6cb6_4842_9ec3_f68831b2daf5.slice/crio-9bfc98fddaaaa7c99982a1333b370b3e495263b100432bba326ede48847a0f41 WatchSource:0}: Error finding container 9bfc98fddaaaa7c99982a1333b370b3e495263b100432bba326ede48847a0f41: Status 404 returned error can't find the container with id 9bfc98fddaaaa7c99982a1333b370b3e495263b100432bba326ede48847a0f41 Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.089910 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:21 crc kubenswrapper[5125]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 19:30:21 crc 
kubenswrapper[5125]: while [ true ]; Dec 08 19:30:21 crc kubenswrapper[5125]: do Dec 08 19:30:21 crc kubenswrapper[5125]: for f in $(ls /tmp/serviceca); do Dec 08 19:30:21 crc kubenswrapper[5125]: echo $f Dec 08 19:30:21 crc kubenswrapper[5125]: ca_file_path="/tmp/serviceca/${f}" Dec 08 19:30:21 crc kubenswrapper[5125]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 19:30:21 crc kubenswrapper[5125]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 19:30:21 crc kubenswrapper[5125]: if [ -e "${reg_dir_path}" ]; then Dec 08 19:30:21 crc kubenswrapper[5125]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:21 crc kubenswrapper[5125]: else Dec 08 19:30:21 crc kubenswrapper[5125]: mkdir $reg_dir_path Dec 08 19:30:21 crc kubenswrapper[5125]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: done Dec 08 19:30:21 crc kubenswrapper[5125]: for d in $(ls /etc/docker/certs.d); do Dec 08 19:30:21 crc kubenswrapper[5125]: echo $d Dec 08 19:30:21 crc kubenswrapper[5125]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 19:30:21 crc kubenswrapper[5125]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 19:30:21 crc kubenswrapper[5125]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 08 19:30:21 crc kubenswrapper[5125]: rm -rf /etc/docker/certs.d/$d Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: done Dec 08 19:30:21 crc kubenswrapper[5125]: sleep 60 & wait ${!} Dec 08 19:30:21 crc kubenswrapper[5125]: done Dec 08 19:30:21 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdnq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-jjj2h_openshift-image-registry(05229a97-6cb6-4842-9ec3-f68831b2daf5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:21 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.090637 5125 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.091271 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-jjj2h" podUID="05229a97-6cb6-4842-9ec3-f68831b2daf5" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.092443 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" Dec 08 19:30:21 crc kubenswrapper[5125]: W1208 19:30:21.094336 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaabf1825_0c19_45de_9f9e_fe94777752e6.slice/crio-16a138870cb1cb6faefb39f54dd2ff08c6cb551426f96e4cbb951d7d47850407 WatchSource:0}: Error finding container 16a138870cb1cb6faefb39f54dd2ff08c6cb551426f96e4cbb951d7d47850407: Status 404 returned error can't find the container with id 16a138870cb1cb6faefb39f54dd2ff08c6cb551426f96e4cbb951d7d47850407 Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.094653 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.094697 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.094707 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.094723 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.094734 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.097944 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:21 crc kubenswrapper[5125]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 19:30:21 crc kubenswrapper[5125]: apiVersion: v1 Dec 08 19:30:21 crc kubenswrapper[5125]: clusters: Dec 08 19:30:21 crc kubenswrapper[5125]: - cluster: Dec 08 19:30:21 crc kubenswrapper[5125]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 19:30:21 crc kubenswrapper[5125]: server: https://api-int.crc.testing:6443 Dec 08 19:30:21 crc kubenswrapper[5125]: name: default-cluster Dec 08 19:30:21 crc kubenswrapper[5125]: contexts: Dec 08 19:30:21 crc kubenswrapper[5125]: - context: Dec 08 19:30:21 crc kubenswrapper[5125]: cluster: default-cluster Dec 08 19:30:21 crc kubenswrapper[5125]: namespace: default Dec 08 19:30:21 crc kubenswrapper[5125]: user: default-auth Dec 08 19:30:21 crc kubenswrapper[5125]: name: default-context Dec 08 19:30:21 crc kubenswrapper[5125]: current-context: default-context Dec 08 19:30:21 crc kubenswrapper[5125]: kind: Config Dec 08 19:30:21 crc kubenswrapper[5125]: preferences: {} Dec 08 19:30:21 crc kubenswrapper[5125]: users: Dec 08 19:30:21 crc kubenswrapper[5125]: - name: default-auth Dec 08 19:30:21 crc kubenswrapper[5125]: user: Dec 08 19:30:21 crc kubenswrapper[5125]: client-certificate: 
/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:21 crc kubenswrapper[5125]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:21 crc kubenswrapper[5125]: EOF Dec 08 19:30:21 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42xvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-k9whn_openshift-ovn-kubernetes(aabf1825-0c19-45de-9f9e-fe94777752e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:21 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.098321 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.099413 5125 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.100346 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:30:21 crc kubenswrapper[5125]: W1208 19:30:21.103238 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode25c18b2_98b7_4c40_a059_08f4821dea99.slice/crio-64de6debf75a81c0c5b9e824d7ffc85b1ce5a02e7577581f135c57f2d213aadc WatchSource:0}: Error finding container 64de6debf75a81c0c5b9e824d7ffc85b1ce5a02e7577581f135c57f2d213aadc: Status 404 returned error can't find the container with id 64de6debf75a81c0c5b9e824d7ffc85b1ce5a02e7577581f135c57f2d213aadc Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.106936 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9p7g8" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.108653 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.108869 5125 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmsnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
multus-additional-cni-plugins-rjgzs_openshift-multus(e25c18b2-98b7-4c40-a059-08f4821dea99): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.110112 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" podUID="e25c18b2-98b7-4c40-a059-08f4821dea99" Dec 08 19:30:21 crc kubenswrapper[5125]: W1208 19:30:21.116283 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8cea827_b8e3_4d92_adea_df0afd2397da.slice/crio-b48e14fd759766338880a929aff25ced2e8b714e099940b34ce012a20f2013c3 WatchSource:0}: Error finding container b48e14fd759766338880a929aff25ced2e8b714e099940b34ce012a20f2013c3: Status 404 returned error can't find the container with id b48e14fd759766338880a929aff25ced2e8b714e099940b34ce012a20f2013c3 Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.118084 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-txvvl" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.120193 5125 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4c9bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-slhjr_openshift-machine-config-operator(d8cea827-b8e3-4d92-adea-df0afd2397da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: W1208 19:30:21.120442 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb938d768_ccce_45a6_a982_3f5d6f1a7d98.slice/crio-888363a45ff23dde6aa1abeb94c08396d5d6d929b89046912465b1ccc22ca7d7 WatchSource:0}: Error finding container 888363a45ff23dde6aa1abeb94c08396d5d6d929b89046912465b1ccc22ca7d7: Status 404 returned error can't find the container with id 888363a45ff23dde6aa1abeb94c08396d5d6d929b89046912465b1ccc22ca7d7 Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.123106 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.123931 5125 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4c9bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-slhjr_openshift-machine-config-operator(d8cea827-b8e3-4d92-adea-df0afd2397da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.124365 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:21 crc kubenswrapper[5125]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 19:30:21 crc kubenswrapper[5125]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 19:30:21 crc kubenswrapper[5125]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzwqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9p7g8_openshift-multus(b938d768-ccce-45a6-a982-3f5d6f1a7d98): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:21 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.125215 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.125734 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9p7g8" podUID="b938d768-ccce-45a6-a982-3f5d6f1a7d98" Dec 08 19:30:21 crc kubenswrapper[5125]: W1208 19:30:21.131461 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafa3059b_1744_4855_ab93_3133529920d5.slice/crio-2908e4786a35dd50430dc1186278f5de10695556b76b31764b0ebd2c9a21a872 WatchSource:0}: Error finding container 2908e4786a35dd50430dc1186278f5de10695556b76b31764b0ebd2c9a21a872: Status 404 returned error can't find the container with id 2908e4786a35dd50430dc1186278f5de10695556b76b31764b0ebd2c9a21a872 Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.133282 5125 
status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.134113 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.134576 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:21 crc kubenswrapper[5125]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:21 crc kubenswrapper[5125]: set -uo pipefail Dec 08 19:30:21 crc kubenswrapper[5125]: Dec 08 19:30:21 crc kubenswrapper[5125]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 19:30:21 crc kubenswrapper[5125]: Dec 08 19:30:21 crc kubenswrapper[5125]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 19:30:21 crc kubenswrapper[5125]: HOSTS_FILE="/etc/hosts" Dec 08 19:30:21 crc kubenswrapper[5125]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 19:30:21 crc kubenswrapper[5125]: Dec 08 19:30:21 crc kubenswrapper[5125]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 19:30:21 crc kubenswrapper[5125]: Dec 08 19:30:21 crc kubenswrapper[5125]: # Make a temporary file with the old hosts file's attributes. Dec 08 19:30:21 crc kubenswrapper[5125]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 19:30:21 crc kubenswrapper[5125]: echo "Failed to preserve hosts file. Exiting." Dec 08 19:30:21 crc kubenswrapper[5125]: exit 1 Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: Dec 08 19:30:21 crc kubenswrapper[5125]: while true; do Dec 08 19:30:21 crc kubenswrapper[5125]: declare -A svc_ips Dec 08 19:30:21 crc kubenswrapper[5125]: for svc in "${services[@]}"; do Dec 08 19:30:21 crc kubenswrapper[5125]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 19:30:21 crc kubenswrapper[5125]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Dec 08 19:30:21 crc kubenswrapper[5125]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 19:30:21 crc kubenswrapper[5125]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 19:30:21 crc kubenswrapper[5125]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:21 crc kubenswrapper[5125]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:21 crc kubenswrapper[5125]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:21 crc kubenswrapper[5125]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 19:30:21 crc kubenswrapper[5125]: for i in ${!cmds[*]} Dec 08 19:30:21 crc kubenswrapper[5125]: do Dec 08 19:30:21 crc kubenswrapper[5125]: ips=($(eval "${cmds[i]}")) Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 19:30:21 crc kubenswrapper[5125]: svc_ips["${svc}"]="${ips[@]}" Dec 08 19:30:21 crc kubenswrapper[5125]: break Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: done Dec 08 19:30:21 crc kubenswrapper[5125]: done Dec 08 19:30:21 crc kubenswrapper[5125]: Dec 08 19:30:21 crc kubenswrapper[5125]: # Update /etc/hosts only if we get valid service IPs Dec 08 19:30:21 crc kubenswrapper[5125]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 19:30:21 crc kubenswrapper[5125]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 19:30:21 crc kubenswrapper[5125]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 19:30:21 crc kubenswrapper[5125]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 19:30:21 crc kubenswrapper[5125]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 19:30:21 crc kubenswrapper[5125]: sleep 60 & wait Dec 08 19:30:21 crc kubenswrapper[5125]: continue Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: Dec 08 19:30:21 crc kubenswrapper[5125]: # Append resolver entries for services Dec 08 19:30:21 crc kubenswrapper[5125]: rc=0 Dec 08 19:30:21 crc kubenswrapper[5125]: for svc in "${!svc_ips[@]}"; do Dec 08 19:30:21 crc kubenswrapper[5125]: for ip in ${svc_ips[${svc}]}; do Dec 08 19:30:21 crc kubenswrapper[5125]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 08 19:30:21 crc kubenswrapper[5125]: done Dec 08 19:30:21 crc kubenswrapper[5125]: done Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ $rc -ne 0 ]]; then Dec 08 19:30:21 crc kubenswrapper[5125]: sleep 60 & wait Dec 08 19:30:21 crc kubenswrapper[5125]: continue Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: Dec 08 19:30:21 crc kubenswrapper[5125]: Dec 08 19:30:21 crc kubenswrapper[5125]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 19:30:21 crc kubenswrapper[5125]: # Replace /etc/hosts with our modified version if needed Dec 08 19:30:21 crc kubenswrapper[5125]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 19:30:21 crc kubenswrapper[5125]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: sleep 60 & wait Dec 08 19:30:21 crc kubenswrapper[5125]: unset svc_ips Dec 08 19:30:21 crc kubenswrapper[5125]: done Dec 08 19:30:21 crc kubenswrapper[5125]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptppk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-txvvl_openshift-dns(afa3059b-1744-4855-ab93-3133529920d5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:21 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.136289 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-txvvl" podUID="afa3059b-1744-4855-ab93-3133529920d5" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.145745 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: W1208 19:30:21.146309 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48d0e864_6620_4a75_baa4_8653836f3aab.slice/crio-6c72e721e2a8d7fcc34cc083b0dbe02e8e032b636028e0a263c07f2463f10d25 WatchSource:0}: Error finding container 6c72e721e2a8d7fcc34cc083b0dbe02e8e032b636028e0a263c07f2463f10d25: Status 404 returned error can't find the container with id 6c72e721e2a8d7fcc34cc083b0dbe02e8e032b636028e0a263c07f2463f10d25 Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.148749 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:21 crc kubenswrapper[5125]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:21 crc kubenswrapper[5125]: set -euo pipefail Dec 08 19:30:21 crc kubenswrapper[5125]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 19:30:21 crc kubenswrapper[5125]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 19:30:21 crc kubenswrapper[5125]: # As the secret mount is optional we must wait for the files to be present. 
Dec 08 19:30:21 crc kubenswrapper[5125]: # The service is created in monitor.yaml and this is created in sdn.yaml.
Dec 08 19:30:21 crc kubenswrapper[5125]: TS=$(date +%s)
Dec 08 19:30:21 crc kubenswrapper[5125]: WARN_TS=$(( ${TS} + $(( 20 * 60)) ))
Dec 08 19:30:21 crc kubenswrapper[5125]: HAS_LOGGED_INFO=0
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: log_missing_certs(){
Dec 08 19:30:21 crc kubenswrapper[5125]: CUR_TS=$(date +%s)
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes.
Dec 08 19:30:21 crc kubenswrapper[5125]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then
Dec 08 19:30:21 crc kubenswrapper[5125]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes.
Dec 08 19:30:21 crc kubenswrapper[5125]: HAS_LOGGED_INFO=1
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]: }
Dec 08 19:30:21 crc kubenswrapper[5125]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do
Dec 08 19:30:21 crc kubenswrapper[5125]: log_missing_certs
Dec 08 19:30:21 crc kubenswrapper[5125]: sleep 5
Dec 08 19:30:21 crc kubenswrapper[5125]: done
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
Dec 08 19:30:21 crc kubenswrapper[5125]: exec /usr/bin/kube-rbac-proxy \
Dec 08 19:30:21 crc kubenswrapper[5125]: --logtostderr \
Dec 08 19:30:21 crc kubenswrapper[5125]: --secure-listen-address=:9108 \
Dec 08 19:30:21 crc kubenswrapper[5125]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
Dec 08 19:30:21 crc kubenswrapper[5125]: --upstream=http://127.0.0.1:29108/ \
Dec 08 19:30:21 crc kubenswrapper[5125]: --tls-private-key-file=${TLS_PK} \
Dec 08 19:30:21 crc kubenswrapper[5125]: --tls-cert-file=${TLS_CERT}
Dec 08 19:30:21 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-w8mbx_openshift-ovn-kubernetes(48d0e864-6620-4a75-baa4-8653836f3aab): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:21 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.150937 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:21 crc kubenswrapper[5125]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:21 crc kubenswrapper[5125]: set -o allexport Dec 08 19:30:21 crc kubenswrapper[5125]: source "/env/_master" Dec 08 19:30:21 crc kubenswrapper[5125]: set +o allexport Dec 08 19:30:21 crc kubenswrapper[5125]: fi Dec 08 19:30:21 crc kubenswrapper[5125]: Dec 08 19:30:21 crc kubenswrapper[5125]: ovn_v4_join_subnet_opt= Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "" != "" ]]; then Dec 08 19:30:21 crc kubenswrapper[5125]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 
19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]: ovn_v6_join_subnet_opt=
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "" != "" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet "
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: ovn_v4_transit_switch_subnet_opt=
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "" != "" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet "
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]: ovn_v6_transit_switch_subnet_opt=
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "" != "" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet "
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: dns_name_resolver_enabled_flag=
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "false" == "true" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver"
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: persistent_ips_enabled_flag="--enable-persistent-ips"
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: # This is needed so that converting clusters from GA to TP
Dec 08 19:30:21 crc kubenswrapper[5125]: # will rollout control plane pods as well
Dec 08 19:30:21 crc kubenswrapper[5125]: network_segmentation_enabled_flag=
Dec 08 19:30:21 crc kubenswrapper[5125]: multi_network_enabled_flag=
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "true" == "true" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: multi_network_enabled_flag="--enable-multi-network"
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "true" == "true" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "true" != "true" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: multi_network_enabled_flag="--enable-multi-network"
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]: network_segmentation_enabled_flag="--enable-network-segmentation"
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: route_advertisements_enable_flag=
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "false" == "true" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: route_advertisements_enable_flag="--enable-route-advertisements"
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: preconfigured_udn_addresses_enable_flag=
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "false" == "true" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses"
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: # Enable multi-network policy if configured (control-plane always full mode)
Dec 08 19:30:21 crc kubenswrapper[5125]: multi_network_policy_enabled_flag=
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "false" == "true" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy"
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: # Enable admin network policy if configured (control-plane always full mode)
Dec 08 19:30:21 crc kubenswrapper[5125]: admin_network_policy_enabled_flag=
Dec 08 19:30:21 crc kubenswrapper[5125]: if [[ "true" == "true" ]]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: admin_network_policy_enabled_flag="--enable-admin-network-policy"
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: if [ "shared" == "shared" ]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: gateway_mode_flags="--gateway-mode shared"
Dec 08 19:30:21 crc kubenswrapper[5125]: elif [ "shared" == "local" ]; then
Dec 08 19:30:21 crc kubenswrapper[5125]: gateway_mode_flags="--gateway-mode local"
Dec 08 19:30:21 crc kubenswrapper[5125]: else
Dec 08 19:30:21 crc kubenswrapper[5125]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"."
Dec 08 19:30:21 crc kubenswrapper[5125]: exit 1
Dec 08 19:30:21 crc kubenswrapper[5125]: fi
Dec 08 19:30:21 crc kubenswrapper[5125]:
Dec 08 19:30:21 crc kubenswrapper[5125]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}"
Dec 08 19:30:21 crc kubenswrapper[5125]: exec /usr/bin/ovnkube \
Dec 08 19:30:21 crc kubenswrapper[5125]: --enable-interconnect \
Dec 08 19:30:21 crc kubenswrapper[5125]: --init-cluster-manager "${K8S_NODE}" \
Dec 08 19:30:21 crc kubenswrapper[5125]: --config-file=/run/ovnkube-config/ovnkube.conf \
Dec 08 19:30:21 crc kubenswrapper[5125]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \
Dec 08 19:30:21 crc kubenswrapper[5125]: --metrics-bind-address "127.0.0.1:29108" \
Dec 08 19:30:21 crc kubenswrapper[5125]: --metrics-enable-pprof \
Dec 08 19:30:21 crc kubenswrapper[5125]: --metrics-enable-config-duration \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${ovn_v4_join_subnet_opt} \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${ovn_v6_join_subnet_opt} \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${ovn_v4_transit_switch_subnet_opt} \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${ovn_v6_transit_switch_subnet_opt} \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${dns_name_resolver_enabled_flag} \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${persistent_ips_enabled_flag} \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${multi_network_enabled_flag} \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${network_segmentation_enabled_flag} \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${gateway_mode_flags} \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${route_advertisements_enable_flag} \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${preconfigured_udn_addresses_enable_flag} \
Dec 08 19:30:21 crc kubenswrapper[5125]: --enable-egress-ip=true \
Dec 08 19:30:21 crc kubenswrapper[5125]: --enable-egress-firewall=true \
Dec 08 19:30:21 crc kubenswrapper[5125]: --enable-egress-qos=true \
Dec 08 19:30:21 crc kubenswrapper[5125]: --enable-egress-service=true \
Dec 08 19:30:21 crc kubenswrapper[5125]: --enable-multicast \
Dec 08 19:30:21 crc kubenswrapper[5125]: --enable-multi-external-gateway=true \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${multi_network_policy_enabled_flag} \
Dec 08 19:30:21 crc kubenswrapper[5125]: ${admin_network_policy_enabled_flag}
Dec 08 19:30:21 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-w8mbx_openshift-ovn-kubernetes(48d0e864-6620-4a75-baa4-8653836f3aab): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:21 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.152844 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" podUID="48d0e864-6620-4a75-baa4-8653836f3aab" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.155096 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.166138 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.175393 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.187186 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.196158 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.196190 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 
19:30:21.196200 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.196242 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.196252 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.200107 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.224412 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.235487 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.245192 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.254904 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.271451 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.280116 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.297808 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.297876 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.297897 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.297920 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.297938 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.320505 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.355973 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}
},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.365196 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.365305 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.365346 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 
19:30:21.365396 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.365515 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:22.365470839 +0000 UTC m=+79.135961173 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.365589 5125 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.365719 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.365752 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:21 crc kubenswrapper[5125]: 
E1208 19:30:21.365768 5125 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.365768 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:22.365737606 +0000 UTC m=+79.136227920 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.365844 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:22.365823589 +0000 UTC m=+79.136313873 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.366021 5125 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.366171 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:22.366130377 +0000 UTC m=+79.136620651 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.399974 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.400040 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.400061 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.400087 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.400105 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.402822 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde72610
9a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.440766 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.466859 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.466940 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod 
\"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.467099 5125 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.467243 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs podName:9a677937-278d-4989-b196-40d5daba436d nodeName:}" failed. No retries permitted until 2025-12-08 19:30:22.467215468 +0000 UTC m=+79.237705772 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs") pod "network-metrics-daemon-7lwbz" (UID: "9a677937-278d-4989-b196-40d5daba436d") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.467241 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.467288 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.467305 5125 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.467393 5125 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:22.467354881 +0000 UTC m=+79.237845185 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.475829 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.502553 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.502593 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.502602 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.502632 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.502644 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.516817 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.555876 5125 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.579102 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.579198 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.579215 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.579232 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.579246 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.593963 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.596326 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e
7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.597927 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.597964 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.597978 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.597996 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.598008 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.612054 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.615487 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.615535 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.615549 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.615567 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.615581 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.625185 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.628976 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.629000 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.629011 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.629027 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.629038 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.637283 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.638533 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.641789 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.641846 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.641857 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.641875 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.641886 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.651553 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.651843 5125 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.653018 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.653091 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.653117 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.653152 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.653176 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.755708 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.755785 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.755806 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.755830 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.755849 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.772734 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.773602 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.775459 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.776850 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.778732 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.781943 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.783507 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.784543 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.785688 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.786692 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.788479 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.789630 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.790852 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.792027 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.794396 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.795897 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" 
path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.797174 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.799664 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.801168 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.806536 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.808704 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.812838 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.817554 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.819177 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" 
path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.820900 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.822038 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.824008 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.825828 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.831328 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.833512 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.835782 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.838710 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" 
path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.842778 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.844972 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.847994 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.850256 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.852572 5125 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.852983 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.860106 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.860179 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc 
kubenswrapper[5125]: I1208 19:30:21.860205 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.860239 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.860264 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.860538 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.863358 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.865266 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.868035 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.869378 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" 
path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.872421 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.874134 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.875245 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.877696 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.879907 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.882760 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.884441 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.886585 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" 
path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.888067 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.890382 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.892763 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.896421 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.897935 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.900358 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.901376 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.902840 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:21 crc kubenswrapper[5125]: E1208 19:30:21.902958 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.962412 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.962484 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.962504 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.962529 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:21 crc kubenswrapper[5125]: I1208 19:30:21.962547 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:21Z","lastTransitionTime":"2025-12-08T19:30:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.002248 5125 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.003716 5125 scope.go:117] "RemoveContainer" containerID="346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.004079 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.035428 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.065529 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.065564 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.065576 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.065591 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.065604 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.068269 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p7g8" event={"ID":"b938d768-ccce-45a6-a982-3f5d6f1a7d98","Type":"ContainerStarted","Data":"888363a45ff23dde6aa1abeb94c08396d5d6d929b89046912465b1ccc22ca7d7"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.069507 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"916c7f3924842bd52ada7f6a194d107abdb621023a2c95f4d0a892706b36c166"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.071509 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"3821d9c69b8cf4e36407de80aa59e65577f4c20684a08b3b54376954965c6f0a"} Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.071764 5125 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.072214 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:22 crc kubenswrapper[5125]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 19:30:22 crc kubenswrapper[5125]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 19:30:22 crc kubenswrapper[5125]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzwqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9p7g8_openshift-multus(b938d768-ccce-45a6-a982-3f5d6f1a7d98): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:22 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.072635 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerStarted","Data":"b48e14fd759766338880a929aff25ced2e8b714e099940b34ce012a20f2013c3"} Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.072934 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.073325 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9p7g8" podUID="b938d768-ccce-45a6-a982-3f5d6f1a7d98" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.074252 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:22 crc kubenswrapper[5125]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: set -o allexport Dec 08 19:30:22 crc kubenswrapper[5125]: source "/env/_master" Dec 08 19:30:22 crc kubenswrapper[5125]: set +o allexport Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 08 19:30:22 crc kubenswrapper[5125]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 19:30:22 crc kubenswrapper[5125]: ho_enable="--enable-hybrid-overlay" Dec 08 19:30:22 crc kubenswrapper[5125]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 19:30:22 crc kubenswrapper[5125]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 19:30:22 crc kubenswrapper[5125]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 19:30:22 crc kubenswrapper[5125]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:22 crc kubenswrapper[5125]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 19:30:22 crc kubenswrapper[5125]: --webhook-host=127.0.0.1 \ Dec 08 19:30:22 crc kubenswrapper[5125]: --webhook-port=9743 \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${ho_enable} \ Dec 08 19:30:22 crc kubenswrapper[5125]: --enable-interconnect \ Dec 08 19:30:22 crc kubenswrapper[5125]: --disable-approver \ Dec 08 19:30:22 crc kubenswrapper[5125]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 19:30:22 crc kubenswrapper[5125]: --wait-for-kubernetes-api=200s \ Dec 08 19:30:22 crc kubenswrapper[5125]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 19:30:22 crc kubenswrapper[5125]: --loglevel="${LOGLEVEL}" Dec 08 19:30:22 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:22 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.074345 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" event={"ID":"e25c18b2-98b7-4c40-a059-08f4821dea99","Type":"ContainerStarted","Data":"64de6debf75a81c0c5b9e824d7ffc85b1ce5a02e7577581f135c57f2d213aadc"} Dec 08 19:30:22 crc kubenswrapper[5125]: 
I1208 19:30:22.075725 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerStarted","Data":"16a138870cb1cb6faefb39f54dd2ff08c6cb551426f96e4cbb951d7d47850407"} Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.076425 5125 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmsnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-rjgzs_openshift-multus(e25c18b2-98b7-4c40-a059-08f4821dea99): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.076667 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:22 crc kubenswrapper[5125]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: set -o allexport Dec 08 19:30:22 crc kubenswrapper[5125]: source "/env/_master" Dec 08 19:30:22 crc kubenswrapper[5125]: set +o allexport Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 19:30:22 crc kubenswrapper[5125]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:22 crc kubenswrapper[5125]: --disable-webhook \ Dec 08 19:30:22 crc kubenswrapper[5125]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 19:30:22 crc kubenswrapper[5125]: --loglevel="${LOGLEVEL}" Dec 08 19:30:22 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:22 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.076830 5125 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4c9bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-slhjr_openshift-machine-config-operator(d8cea827-b8e3-4d92-adea-df0afd2397da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.077216 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" event={"ID":"48d0e864-6620-4a75-baa4-8653836f3aab","Type":"ContainerStarted","Data":"6c72e721e2a8d7fcc34cc083b0dbe02e8e032b636028e0a263c07f2463f10d25"} Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.077641 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" podUID="e25c18b2-98b7-4c40-a059-08f4821dea99" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.077790 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.078075 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:22 crc kubenswrapper[5125]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 19:30:22 crc kubenswrapper[5125]: apiVersion: v1 Dec 08 19:30:22 crc kubenswrapper[5125]: clusters: Dec 08 19:30:22 crc kubenswrapper[5125]: - cluster: Dec 08 19:30:22 crc kubenswrapper[5125]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 19:30:22 crc 
kubenswrapper[5125]: server: https://api-int.crc.testing:6443 Dec 08 19:30:22 crc kubenswrapper[5125]: name: default-cluster Dec 08 19:30:22 crc kubenswrapper[5125]: contexts: Dec 08 19:30:22 crc kubenswrapper[5125]: - context: Dec 08 19:30:22 crc kubenswrapper[5125]: cluster: default-cluster Dec 08 19:30:22 crc kubenswrapper[5125]: namespace: default Dec 08 19:30:22 crc kubenswrapper[5125]: user: default-auth Dec 08 19:30:22 crc kubenswrapper[5125]: name: default-context Dec 08 19:30:22 crc kubenswrapper[5125]: current-context: default-context Dec 08 19:30:22 crc kubenswrapper[5125]: kind: Config Dec 08 19:30:22 crc kubenswrapper[5125]: preferences: {} Dec 08 19:30:22 crc kubenswrapper[5125]: users: Dec 08 19:30:22 crc kubenswrapper[5125]: - name: default-auth Dec 08 19:30:22 crc kubenswrapper[5125]: user: Dec 08 19:30:22 crc kubenswrapper[5125]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:22 crc kubenswrapper[5125]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:22 crc kubenswrapper[5125]: EOF Dec 08 19:30:22 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42xvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod ovnkube-node-k9whn_openshift-ovn-kubernetes(aabf1825-0c19-45de-9f9e-fe94777752e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:22 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.078810 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-txvvl" event={"ID":"afa3059b-1744-4855-ab93-3133529920d5","Type":"ContainerStarted","Data":"2908e4786a35dd50430dc1186278f5de10695556b76b31764b0ebd2c9a21a872"} Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.079166 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.079598 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:22 crc kubenswrapper[5125]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:22 crc kubenswrapper[5125]: set -euo pipefail Dec 08 19:30:22 crc kubenswrapper[5125]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 19:30:22 crc kubenswrapper[5125]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 19:30:22 crc kubenswrapper[5125]: # As the secret mount is optional we must wait for the files to be present. Dec 08 19:30:22 crc kubenswrapper[5125]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Dec 08 19:30:22 crc kubenswrapper[5125]: TS=$(date +%s) Dec 08 19:30:22 crc kubenswrapper[5125]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 19:30:22 crc kubenswrapper[5125]: HAS_LOGGED_INFO=0 Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: log_missing_certs(){ Dec 08 19:30:22 crc kubenswrapper[5125]: CUR_TS=$(date +%s) Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 19:30:22 crc kubenswrapper[5125]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 19:30:22 crc kubenswrapper[5125]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 19:30:22 crc kubenswrapper[5125]: HAS_LOGGED_INFO=1 Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: } Dec 08 19:30:22 crc kubenswrapper[5125]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Dec 08 19:30:22 crc kubenswrapper[5125]: log_missing_certs Dec 08 19:30:22 crc kubenswrapper[5125]: sleep 5 Dec 08 19:30:22 crc kubenswrapper[5125]: done Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 19:30:22 crc kubenswrapper[5125]: exec /usr/bin/kube-rbac-proxy \ Dec 08 19:30:22 crc kubenswrapper[5125]: --logtostderr \ Dec 08 19:30:22 crc kubenswrapper[5125]: --secure-listen-address=:9108 \ Dec 08 19:30:22 crc kubenswrapper[5125]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 19:30:22 crc kubenswrapper[5125]: --upstream=http://127.0.0.1:29108/ \ Dec 08 19:30:22 crc kubenswrapper[5125]: --tls-private-key-file=${TLS_PK} \ Dec 08 19:30:22 crc kubenswrapper[5125]: --tls-cert-file=${TLS_CERT} Dec 08 19:30:22 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-w8mbx_openshift-ovn-kubernetes(48d0e864-6620-4a75-baa4-8653836f3aab): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:22 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.080479 5125 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4c9bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-slhjr_openshift-machine-config-operator(d8cea827-b8e3-4d92-adea-df0afd2397da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.080495 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jjj2h" event={"ID":"05229a97-6cb6-4842-9ec3-f68831b2daf5","Type":"ContainerStarted","Data":"9bfc98fddaaaa7c99982a1333b370b3e495263b100432bba326ede48847a0f41"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.081096 5125 scope.go:117] "RemoveContainer" containerID="346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af" Dec 08 19:30:22 crc kubenswrapper[5125]: 
E1208 19:30:22.081255 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.081809 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:22 crc kubenswrapper[5125]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:22 crc kubenswrapper[5125]: set -uo pipefail Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 19:30:22 crc kubenswrapper[5125]: HOSTS_FILE="/etc/hosts" Dec 08 19:30:22 crc kubenswrapper[5125]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: # Make a temporary file with the old hosts file's attributes. Dec 08 19:30:22 crc kubenswrapper[5125]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 19:30:22 crc kubenswrapper[5125]: echo "Failed to preserve hosts file. Exiting." 
Dec 08 19:30:22 crc kubenswrapper[5125]: exit 1 Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: while true; do Dec 08 19:30:22 crc kubenswrapper[5125]: declare -A svc_ips Dec 08 19:30:22 crc kubenswrapper[5125]: for svc in "${services[@]}"; do Dec 08 19:30:22 crc kubenswrapper[5125]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 19:30:22 crc kubenswrapper[5125]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 19:30:22 crc kubenswrapper[5125]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 19:30:22 crc kubenswrapper[5125]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 19:30:22 crc kubenswrapper[5125]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:22 crc kubenswrapper[5125]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:22 crc kubenswrapper[5125]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:22 crc kubenswrapper[5125]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 19:30:22 crc kubenswrapper[5125]: for i in ${!cmds[*]} Dec 08 19:30:22 crc kubenswrapper[5125]: do Dec 08 19:30:22 crc kubenswrapper[5125]: ips=($(eval "${cmds[i]}")) Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: svc_ips["${svc}"]="${ips[@]}" Dec 08 19:30:22 crc kubenswrapper[5125]: break Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: done Dec 08 19:30:22 crc kubenswrapper[5125]: done Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: # Update /etc/hosts only if we get valid service IPs Dec 08 19:30:22 crc kubenswrapper[5125]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 19:30:22 crc kubenswrapper[5125]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 19:30:22 crc kubenswrapper[5125]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 19:30:22 crc kubenswrapper[5125]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 19:30:22 crc kubenswrapper[5125]: sleep 60 & wait Dec 08 19:30:22 crc kubenswrapper[5125]: continue Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: # Append resolver entries for services Dec 08 19:30:22 crc kubenswrapper[5125]: rc=0 Dec 08 19:30:22 crc kubenswrapper[5125]: for svc in "${!svc_ips[@]}"; do Dec 08 19:30:22 crc kubenswrapper[5125]: for ip in ${svc_ips[${svc}]}; do Dec 08 19:30:22 crc kubenswrapper[5125]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 08 19:30:22 crc kubenswrapper[5125]: done Dec 08 19:30:22 crc kubenswrapper[5125]: done Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ $rc -ne 0 ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: sleep 60 & wait Dec 08 19:30:22 crc kubenswrapper[5125]: continue Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 19:30:22 crc kubenswrapper[5125]: # Replace /etc/hosts with our modified version if needed Dec 08 19:30:22 crc kubenswrapper[5125]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 19:30:22 crc kubenswrapper[5125]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: sleep 60 & wait Dec 08 19:30:22 crc kubenswrapper[5125]: unset svc_ips Dec 08 19:30:22 crc kubenswrapper[5125]: done Dec 08 19:30:22 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptppk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-txvvl_openshift-dns(afa3059b-1744-4855-ab93-3133529920d5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:22 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.081873 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.082799 
5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:22 crc kubenswrapper[5125]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: set -o allexport Dec 08 19:30:22 crc kubenswrapper[5125]: source "/env/_master" Dec 08 19:30:22 crc kubenswrapper[5125]: set +o allexport Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: ovn_v4_join_subnet_opt= Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "" != "" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: ovn_v6_join_subnet_opt= Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "" != "" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: ovn_v4_transit_switch_subnet_opt= Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "" != "" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: ovn_v6_transit_switch_subnet_opt= Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "" != "" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: dns_name_resolver_enabled_flag= Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "false" == 
"true" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: # This is needed so that converting clusters from GA to TP Dec 08 19:30:22 crc kubenswrapper[5125]: # will rollout control plane pods as well Dec 08 19:30:22 crc kubenswrapper[5125]: network_segmentation_enabled_flag= Dec 08 19:30:22 crc kubenswrapper[5125]: multi_network_enabled_flag= Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "true" == "true" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: multi_network_enabled_flag="--enable-multi-network" Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "true" == "true" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "true" != "true" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: multi_network_enabled_flag="--enable-multi-network" Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: route_advertisements_enable_flag= Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "false" == "true" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: preconfigured_udn_addresses_enable_flag= Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "false" == "true" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 19:30:22 crc 
kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 19:30:22 crc kubenswrapper[5125]: multi_network_policy_enabled_flag= Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "false" == "true" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 19:30:22 crc kubenswrapper[5125]: admin_network_policy_enabled_flag= Dec 08 19:30:22 crc kubenswrapper[5125]: if [[ "true" == "true" ]]; then Dec 08 19:30:22 crc kubenswrapper[5125]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: if [ "shared" == "shared" ]; then Dec 08 19:30:22 crc kubenswrapper[5125]: gateway_mode_flags="--gateway-mode shared" Dec 08 19:30:22 crc kubenswrapper[5125]: elif [ "shared" == "local" ]; then Dec 08 19:30:22 crc kubenswrapper[5125]: gateway_mode_flags="--gateway-mode local" Dec 08 19:30:22 crc kubenswrapper[5125]: else Dec 08 19:30:22 crc kubenswrapper[5125]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 08 19:30:22 crc kubenswrapper[5125]: exit 1 Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: Dec 08 19:30:22 crc kubenswrapper[5125]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 19:30:22 crc kubenswrapper[5125]: exec /usr/bin/ovnkube \ Dec 08 19:30:22 crc kubenswrapper[5125]: --enable-interconnect \ Dec 08 19:30:22 crc kubenswrapper[5125]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 19:30:22 crc kubenswrapper[5125]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 19:30:22 crc kubenswrapper[5125]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 19:30:22 crc kubenswrapper[5125]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 19:30:22 crc kubenswrapper[5125]: --metrics-enable-pprof \ Dec 08 19:30:22 crc kubenswrapper[5125]: --metrics-enable-config-duration \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${ovn_v4_join_subnet_opt} \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${ovn_v6_join_subnet_opt} \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${dns_name_resolver_enabled_flag} \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${persistent_ips_enabled_flag} \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${multi_network_enabled_flag} \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${network_segmentation_enabled_flag} \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${gateway_mode_flags} \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${route_advertisements_enable_flag} \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 19:30:22 crc kubenswrapper[5125]: --enable-egress-ip=true \ Dec 08 19:30:22 crc kubenswrapper[5125]: --enable-egress-firewall=true \ Dec 08 19:30:22 crc kubenswrapper[5125]: --enable-egress-qos=true \ Dec 08 19:30:22 crc kubenswrapper[5125]: --enable-egress-service=true \ 
Dec 08 19:30:22 crc kubenswrapper[5125]: --enable-multicast \ Dec 08 19:30:22 crc kubenswrapper[5125]: --enable-multi-external-gateway=true \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${multi_network_policy_enabled_flag} \ Dec 08 19:30:22 crc kubenswrapper[5125]: ${admin_network_policy_enabled_flag} Dec 08 19:30:22 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-w8mbx_openshift-ovn-kubernetes(48d0e864-6620-4a75-baa4-8653836f3aab): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:22 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.083337 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-txvvl" podUID="afa3059b-1744-4855-ab93-3133529920d5" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.083857 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:22 crc kubenswrapper[5125]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 19:30:22 crc kubenswrapper[5125]: while [ true ]; Dec 08 19:30:22 crc kubenswrapper[5125]: do Dec 08 19:30:22 crc kubenswrapper[5125]: for f in $(ls /tmp/serviceca); do Dec 08 19:30:22 crc kubenswrapper[5125]: echo $f Dec 08 19:30:22 crc kubenswrapper[5125]: ca_file_path="/tmp/serviceca/${f}" Dec 08 19:30:22 crc kubenswrapper[5125]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 19:30:22 crc kubenswrapper[5125]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 19:30:22 crc kubenswrapper[5125]: if [ -e "${reg_dir_path}" ]; then Dec 08 19:30:22 crc kubenswrapper[5125]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:22 crc kubenswrapper[5125]: else Dec 08 19:30:22 crc kubenswrapper[5125]: mkdir $reg_dir_path Dec 08 19:30:22 crc kubenswrapper[5125]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: done Dec 08 19:30:22 crc kubenswrapper[5125]: for 
d in $(ls /etc/docker/certs.d); do Dec 08 19:30:22 crc kubenswrapper[5125]: echo $d Dec 08 19:30:22 crc kubenswrapper[5125]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 19:30:22 crc kubenswrapper[5125]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 19:30:22 crc kubenswrapper[5125]: if [ ! -e "${reg_conf_path}" ]; then Dec 08 19:30:22 crc kubenswrapper[5125]: rm -rf /etc/docker/certs.d/$d Dec 08 19:30:22 crc kubenswrapper[5125]: fi Dec 08 19:30:22 crc kubenswrapper[5125]: done Dec 08 19:30:22 crc kubenswrapper[5125]: sleep 60 & wait ${!} Dec 08 19:30:22 crc kubenswrapper[5125]: done Dec 08 19:30:22 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdnq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
node-ca-jjj2h_openshift-image-registry(05229a97-6cb6-4842-9ec3-f68831b2daf5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:22 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.083930 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" podUID="48d0e864-6620-4a75-baa4-8653836f3aab" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.083772 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.085010 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-jjj2h" podUID="05229a97-6cb6-4842-9ec3-f68831b2daf5" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.095037 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.125739 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.136016 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.149144 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.161361 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.167394 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.167583 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.167720 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 
19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.167825 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.167956 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.176541 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.184004 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.191818 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.198517 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.209383 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69
b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\
\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.217307 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.225130 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.231498 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.239427 5125 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.269507 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.269546 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.269558 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.269576 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.269587 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.274898 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.315772 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.364288 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.371387 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.371444 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.371462 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.371487 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.371504 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.376919 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.377100 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.37707075 +0000 UTC m=+81.147561044 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.377230 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.377285 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod 
\"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.377318 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.377396 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.377423 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.377441 5125 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.377453 5125 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.377460 5125 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:22 crc 
kubenswrapper[5125]: E1208 19:30:22.377514 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.377498212 +0000 UTC m=+81.147988506 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.377535 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.377526642 +0000 UTC m=+81.148016926 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.377549 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.377542483 +0000 UTC m=+81.148032777 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.397165 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.439577 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.474256 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.474357 5125 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.474378 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.474404 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.474423 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.477995 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.478081 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.478192 5125 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.478269 5125 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs podName:9a677937-278d-4989-b196-40d5daba436d nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.478248774 +0000 UTC m=+81.248739048 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs") pod "network-metrics-daemon-7lwbz" (UID: "9a677937-278d-4989-b196-40d5daba436d") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.478341 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.478366 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.478382 5125 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.478439 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:24.478420128 +0000 UTC m=+81.248910422 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.483241 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.518730 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b
18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"
2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.564852 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4eba
ac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"
gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID
\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.576679 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.576761 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.576789 5125 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.576821 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.576848 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.602875 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.638597 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.678792 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.678845 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.678858 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.678878 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.678894 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.680229 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.716331 5125 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.757506 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 
19:30:22.766707 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.766707 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.766891 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.766923 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.767074 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:22 crc kubenswrapper[5125]: E1208 19:30:22.767234 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.785545 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.785640 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.785660 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.785686 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.785705 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.798016 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.839357 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.875174 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.887902 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.887946 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.887958 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.887974 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.887985 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.920641 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.958393 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.990753 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.990799 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.990811 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.990825 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:22 crc kubenswrapper[5125]: I1208 19:30:22.990851 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:22Z","lastTransitionTime":"2025-12-08T19:30:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.015641 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.037921 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.081542 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.093456 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.093524 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 
19:30:23.093546 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.093571 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.093591 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.123377 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.172569 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.196350 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.196479 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.196561 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.196589 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.196642 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.299560 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.299681 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.299702 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.299728 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.299746 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.402090 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.402144 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.402157 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.402179 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.402193 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.504073 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.504108 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.504118 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.504133 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.504143 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.606333 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.606367 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.606376 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.606389 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.606399 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.708815 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.708897 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.708923 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.708953 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.708977 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.766833 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:23 crc kubenswrapper[5125]: E1208 19:30:23.767088 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.776262 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.793743 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.806761 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.811702 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.811814 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.811839 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.811868 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.811893 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.829314 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.847365 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.878144 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"5
0Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61b
f361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\
\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID
\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.893801 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.903967 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.914753 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.914806 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.914847 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.914865 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.914883 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:23Z","lastTransitionTime":"2025-12-08T19:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.918090 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.939115 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.952497 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.964468 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.974933 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:23 crc kubenswrapper[5125]: I1208 19:30:23.992308 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi
\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.007269 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.017845 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.017891 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.017906 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.017923 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.017940 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.021085 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.034667 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.043400 5125 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.056693 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 
19:30:24.120222 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.120282 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.120300 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.120327 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.120384 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.222585 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.222679 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.222699 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.222722 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.222740 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.325170 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.325224 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.325236 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.325252 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.325265 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.399191 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.399378 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 19:30:28.399344628 +0000 UTC m=+85.169834932 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.399462 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.399505 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.399534 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.399674 5125 secret.go:189] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.399739 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:28.399723209 +0000 UTC m=+85.170213493 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.399876 5125 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.399986 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.400033 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.400053 5125 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 
19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.399993 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:28.399968145 +0000 UTC m=+85.170458459 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.400137 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:28.400118829 +0000 UTC m=+85.170609143 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.428399 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.428477 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.428493 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.428512 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.428527 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.501159 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.501251 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.501348 5125 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.501456 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs podName:9a677937-278d-4989-b196-40d5daba436d nodeName:}" failed. No retries permitted until 2025-12-08 19:30:28.501433026 +0000 UTC m=+85.271923310 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs") pod "network-metrics-daemon-7lwbz" (UID: "9a677937-278d-4989-b196-40d5daba436d") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.501496 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.501524 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.501543 5125 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.501670 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:28.501639311 +0000 UTC m=+85.272129645 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.531264 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.531320 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.531337 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.531362 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.531379 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.633464 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.633532 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.633558 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.633587 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.633646 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.736106 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.736167 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.736192 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.736222 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.736246 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.767361 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.767542 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.767575 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.767591 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.767833 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:24 crc kubenswrapper[5125]: E1208 19:30:24.767926 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.838969 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.839010 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.839020 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.839033 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.839044 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.941514 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.941563 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.941575 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.941591 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:24 crc kubenswrapper[5125]: I1208 19:30:24.941604 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:24Z","lastTransitionTime":"2025-12-08T19:30:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.044599 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.044706 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.044730 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.044758 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.044777 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.147667 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.147739 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.147764 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.147801 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.147824 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.250379 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.250473 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.250502 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.250535 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.250560 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.353490 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.353571 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.353658 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.353699 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.353721 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.456689 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.456770 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.456797 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.456828 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.456851 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.559659 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.559738 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.559761 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.559788 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.559808 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.663003 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.663090 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.663118 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.663153 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.663177 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.765821 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.765915 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.765934 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.765960 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.765978 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.766726 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:25 crc kubenswrapper[5125]: E1208 19:30:25.766886 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.868733 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.868818 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.868846 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.868876 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.868899 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.971996 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.972100 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.972146 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.972182 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:25 crc kubenswrapper[5125]: I1208 19:30:25.972204 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:25Z","lastTransitionTime":"2025-12-08T19:30:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.075973 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.076039 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.076052 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.076071 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.076084 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.179198 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.179286 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.179307 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.179339 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.179364 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.283373 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.283501 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.283515 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.283538 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.283559 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.386381 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.386540 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.386565 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.386600 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.386668 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.489771 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.489838 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.489856 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.489885 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.489911 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.592690 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.592766 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.592786 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.592816 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.592837 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.696431 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.696488 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.696500 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.696525 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.696535 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.766836 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.766839 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:30:26 crc kubenswrapper[5125]: E1208 19:30:26.767025 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.767068 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:26 crc kubenswrapper[5125]: E1208 19:30:26.767145 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 19:30:26 crc kubenswrapper[5125]: E1208 19:30:26.767353 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.799080 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.799126 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.799137 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.799153 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.799163 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.901266 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.901335 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.901354 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.901380 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:26 crc kubenswrapper[5125]: I1208 19:30:26.901397 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:26Z","lastTransitionTime":"2025-12-08T19:30:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.003945 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.004025 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.004052 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.004091 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.004117 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.106476 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.106535 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.106550 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.106570 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.106586 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.208507 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.208596 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.208638 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.208664 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.208682 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.312093 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.312144 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.312153 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.312171 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.312185 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.415225 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.415285 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.415297 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.415315 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.415329 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.517788 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.517876 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.517896 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.517922 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.517940 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.620775 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.620859 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.620883 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.620908 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.620927 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.724234 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.724390 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.724421 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.724450 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.724469 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.767106 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:30:27 crc kubenswrapper[5125]: E1208 19:30:27.767333 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.826525 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.826587 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.826659 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.826693 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.826719 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.929720 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.929775 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.929791 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.929810 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:27 crc kubenswrapper[5125]: I1208 19:30:27.929823 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:27Z","lastTransitionTime":"2025-12-08T19:30:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.032469 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.032537 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.032557 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.032581 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.032600 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.134510 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.134695 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.134717 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.134738 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.134754 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.237559 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.237666 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.237682 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.237704 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.237719 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.340564 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.340636 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.340648 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.340664 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.340674 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.442699 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.442741 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.442752 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.442766 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.442776 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.449508 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.449796 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:30:36.44972797 +0000 UTC m=+93.220218284 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.449930 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.449995 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.450132 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.450133 5125 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.450275 5125 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.450352 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:36.450336637 +0000 UTC m=+93.220826951 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.450377 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:36.450365538 +0000 UTC m=+93.220855852 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.450416 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.450466 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.450494 5125 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.450662 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:36.450585684 +0000 UTC m=+93.221075998 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.551598 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.551677 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.551801 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.551818 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.551830 5125 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.551838 5125 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.551889 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:36.55187207 +0000 UTC m=+93.322362354 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.551850 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.551960 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs podName:9a677937-278d-4989-b196-40d5daba436d nodeName:}" failed. No retries permitted until 2025-12-08 19:30:36.551922561 +0000 UTC m=+93.322412875 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs") pod "network-metrics-daemon-7lwbz" (UID: "9a677937-278d-4989-b196-40d5daba436d") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.551997 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.552023 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.552046 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.552061 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.654056 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.654103 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.654146 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.654163 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.654175 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.755999 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.756062 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.756082 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.756109 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.756127 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.767357 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.767365 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.767521 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.767368 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.767678 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:28 crc kubenswrapper[5125]: E1208 19:30:28.767818 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.858213 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.858299 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.858319 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.858347 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.858367 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.960875 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.960919 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.960928 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.960943 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:28 crc kubenswrapper[5125]: I1208 19:30:28.960959 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:28Z","lastTransitionTime":"2025-12-08T19:30:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.063711 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.063790 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.063818 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.063850 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.063873 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.166819 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.166876 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.166889 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.166911 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.166924 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.269579 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.269700 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.269728 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.269753 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.269773 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.372527 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.372647 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.372690 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.372726 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.372750 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.475051 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.475100 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.475164 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.475194 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.475269 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.577809 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.578011 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.578080 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.578112 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.578167 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.681079 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.681161 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.681202 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.681234 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.681258 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.767429 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:29 crc kubenswrapper[5125]: E1208 19:30:29.767685 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.783893 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.783952 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.783972 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.783994 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.784011 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.886325 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.886396 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.886419 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.886440 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.886458 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.989441 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.989525 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.989544 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.989569 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:29 crc kubenswrapper[5125]: I1208 19:30:29.989590 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:29Z","lastTransitionTime":"2025-12-08T19:30:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.093210 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.093283 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.093302 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.093329 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.093342 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.195424 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.195506 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.195532 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.195563 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.195585 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.297549 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.297598 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.297632 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.297650 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.297663 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.400091 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.400167 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.400187 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.400213 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.400233 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.502659 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.502748 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.502769 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.502794 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.502811 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.605231 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.605280 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.605293 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.605311 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.605325 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.708346 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.708414 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.708435 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.708460 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.708477 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.767244 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.767293 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.767317 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:30 crc kubenswrapper[5125]: E1208 19:30:30.767437 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:30 crc kubenswrapper[5125]: E1208 19:30:30.767565 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:30 crc kubenswrapper[5125]: E1208 19:30:30.767775 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.810568 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.810642 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.810658 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.810674 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.810687 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.912795 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.912845 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.912857 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.912874 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:30 crc kubenswrapper[5125]: I1208 19:30:30.912889 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:30Z","lastTransitionTime":"2025-12-08T19:30:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.015174 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.015240 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.015258 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.015287 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.015313 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.117796 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.117848 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.117863 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.117880 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.117892 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.220574 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.220649 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.220662 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.220679 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.220694 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.323108 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.323155 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.323166 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.323181 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.323191 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.426054 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.426138 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.426158 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.426181 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.426203 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.528999 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.529077 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.529103 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.529133 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.529155 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.632266 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.632334 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.632351 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.632373 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.632392 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.735407 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.735469 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.735480 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.735498 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.735510 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.767561 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:31 crc kubenswrapper[5125]: E1208 19:30:31.767890 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.816646 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.816709 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.816718 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.816734 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.816743 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: E1208 19:30:31.831263 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.835594 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.835719 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.835747 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.835772 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.835790 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: E1208 19:30:31.848821 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.853911 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.853951 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.853960 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.853974 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.853984 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: E1208 19:30:31.868000 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.871409 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.871490 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.871510 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.871535 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.871553 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: E1208 19:30:31.882645 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.885903 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.885949 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.885963 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.885982 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.885994 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:31 crc kubenswrapper[5125]: E1208 19:30:31.899177 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:31 crc kubenswrapper[5125]: E1208 19:30:31.899433 5125 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.900517 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.900567 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.900580 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.900597 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:31 crc kubenswrapper[5125]: I1208 19:30:31.900634 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:31Z","lastTransitionTime":"2025-12-08T19:30:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.003319 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.003365 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.003377 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.003391 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.003403 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.105354 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.105402 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.105411 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.105423 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.105431 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.219714 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.219777 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.219795 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.219817 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.219833 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.321778 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.321871 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.321886 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.321903 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.321914 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.424076 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.424175 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.424213 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.424243 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.424264 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.527030 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.527102 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.527124 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.527148 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.527167 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.629778 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.629888 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.629909 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.629934 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.629960 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.732035 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.732103 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.732119 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.732140 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.732159 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.766875 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:32 crc kubenswrapper[5125]: E1208 19:30:32.767153 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.767303 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:32 crc kubenswrapper[5125]: E1208 19:30:32.767399 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.767492 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:32 crc kubenswrapper[5125]: E1208 19:30:32.768726 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:32 crc kubenswrapper[5125]: E1208 19:30:32.769269 5125 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:32 crc kubenswrapper[5125]: E1208 19:30:32.769558 5125 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4c9bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-slhjr_openshift-machine-config-operator(d8cea827-b8e3-4d92-adea-df0afd2397da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:32 crc kubenswrapper[5125]: E1208 19:30:32.769581 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:32 crc kubenswrapper[5125]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 19:30:32 crc kubenswrapper[5125]: while [ true ]; Dec 08 19:30:32 crc kubenswrapper[5125]: do Dec 08 19:30:32 crc kubenswrapper[5125]: for f in $(ls /tmp/serviceca); do Dec 08 19:30:32 crc kubenswrapper[5125]: echo $f Dec 08 19:30:32 crc kubenswrapper[5125]: ca_file_path="/tmp/serviceca/${f}" Dec 08 19:30:32 crc kubenswrapper[5125]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 19:30:32 crc kubenswrapper[5125]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 19:30:32 crc kubenswrapper[5125]: if [ -e "${reg_dir_path}" ]; 
then Dec 08 19:30:32 crc kubenswrapper[5125]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:32 crc kubenswrapper[5125]: else Dec 08 19:30:32 crc kubenswrapper[5125]: mkdir $reg_dir_path Dec 08 19:30:32 crc kubenswrapper[5125]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 19:30:32 crc kubenswrapper[5125]: fi Dec 08 19:30:32 crc kubenswrapper[5125]: done Dec 08 19:30:32 crc kubenswrapper[5125]: for d in $(ls /etc/docker/certs.d); do Dec 08 19:30:32 crc kubenswrapper[5125]: echo $d Dec 08 19:30:32 crc kubenswrapper[5125]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 19:30:32 crc kubenswrapper[5125]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 19:30:32 crc kubenswrapper[5125]: if [ ! -e "${reg_conf_path}" ]; then Dec 08 19:30:32 crc kubenswrapper[5125]: rm -rf /etc/docker/certs.d/$d Dec 08 19:30:32 crc kubenswrapper[5125]: fi Dec 08 19:30:32 crc kubenswrapper[5125]: done Dec 08 19:30:32 crc kubenswrapper[5125]: sleep 60 & wait ${!} Dec 08 19:30:32 crc kubenswrapper[5125]: done Dec 08 19:30:32 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdnq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-jjj2h_openshift-image-registry(05229a97-6cb6-4842-9ec3-f68831b2daf5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:32 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:32 crc kubenswrapper[5125]: E1208 19:30:32.770810 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 19:30:32 crc kubenswrapper[5125]: E1208 19:30:32.770826 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-jjj2h" podUID="05229a97-6cb6-4842-9ec3-f68831b2daf5" Dec 08 19:30:32 crc kubenswrapper[5125]: E1208 19:30:32.773177 5125 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4c9bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-slhjr_openshift-machine-config-operator(d8cea827-b8e3-4d92-adea-df0afd2397da): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:32 crc kubenswrapper[5125]: E1208 19:30:32.774460 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.834776 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.834830 5125 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.834842 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.834862 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.834876 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.937200 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.937280 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.937307 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.937336 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:32 crc kubenswrapper[5125]: I1208 19:30:32.937357 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:32Z","lastTransitionTime":"2025-12-08T19:30:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.039386 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.039448 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.039473 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.039495 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.039510 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.141728 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.142013 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.142038 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.142069 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.142094 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.244128 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.244175 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.244194 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.244211 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.244221 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.346359 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.346677 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.346772 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.346869 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.346959 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.448961 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.449248 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.449410 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.449588 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.449782 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.551932 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.551994 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.552014 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.552037 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.552053 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.654298 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.654437 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.654505 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.654530 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.654583 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.756802 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.756884 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.756911 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.756943 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.756965 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.767886 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:33 crc kubenswrapper[5125]: E1208 19:30:33.767995 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:33 crc kubenswrapper[5125]: E1208 19:30:33.770075 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:33 crc kubenswrapper[5125]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:33 crc kubenswrapper[5125]: set -o allexport Dec 08 19:30:33 crc kubenswrapper[5125]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 19:30:33 crc kubenswrapper[5125]: source /etc/kubernetes/apiserver-url.env Dec 08 19:30:33 crc kubenswrapper[5125]: else Dec 08 19:30:33 crc kubenswrapper[5125]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 19:30:33 crc kubenswrapper[5125]: exit 1 Dec 08 19:30:33 crc kubenswrapper[5125]: fi Dec 08 19:30:33 crc kubenswrapper[5125]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 19:30:33 crc kubenswrapper[5125]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:33 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:33 crc kubenswrapper[5125]: E1208 19:30:33.771405 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 19:30:33 crc kubenswrapper[5125]: E1208 19:30:33.771483 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:33 crc kubenswrapper[5125]: init container 
&Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 19:30:33 crc kubenswrapper[5125]: apiVersion: v1 Dec 08 19:30:33 crc kubenswrapper[5125]: clusters: Dec 08 19:30:33 crc kubenswrapper[5125]: - cluster: Dec 08 19:30:33 crc kubenswrapper[5125]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 19:30:33 crc kubenswrapper[5125]: server: https://api-int.crc.testing:6443 Dec 08 19:30:33 crc kubenswrapper[5125]: name: default-cluster Dec 08 19:30:33 crc kubenswrapper[5125]: contexts: Dec 08 19:30:33 crc kubenswrapper[5125]: - context: Dec 08 19:30:33 crc kubenswrapper[5125]: cluster: default-cluster Dec 08 19:30:33 crc kubenswrapper[5125]: namespace: default Dec 08 19:30:33 crc kubenswrapper[5125]: user: default-auth Dec 08 19:30:33 crc kubenswrapper[5125]: name: default-context Dec 08 19:30:33 crc kubenswrapper[5125]: current-context: default-context Dec 08 19:30:33 crc kubenswrapper[5125]: kind: Config Dec 08 19:30:33 crc kubenswrapper[5125]: preferences: {} Dec 08 19:30:33 crc kubenswrapper[5125]: users: Dec 08 19:30:33 crc kubenswrapper[5125]: - name: default-auth Dec 08 19:30:33 crc kubenswrapper[5125]: user: Dec 08 19:30:33 crc kubenswrapper[5125]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:33 crc kubenswrapper[5125]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 19:30:33 crc kubenswrapper[5125]: EOF Dec 08 19:30:33 crc kubenswrapper[5125]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42xvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-k9whn_openshift-ovn-kubernetes(aabf1825-0c19-45de-9f9e-fe94777752e6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:33 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:33 crc kubenswrapper[5125]: E1208 19:30:33.771555 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:33 crc kubenswrapper[5125]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 19:30:33 crc kubenswrapper[5125]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 19:30:33 crc kubenswrapper[5125]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzwqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9p7g8_openshift-multus(b938d768-ccce-45a6-a982-3f5d6f1a7d98): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:33 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:33 crc kubenswrapper[5125]: E1208 19:30:33.772431 5125 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmsnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOn
ce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-rjgzs_openshift-multus(e25c18b2-98b7-4c40-a059-08f4821dea99): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 19:30:33 crc kubenswrapper[5125]: E1208 19:30:33.772582 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" Dec 08 19:30:33 crc kubenswrapper[5125]: E1208 19:30:33.772631 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9p7g8" podUID="b938d768-ccce-45a6-a982-3f5d6f1a7d98" Dec 08 19:30:33 crc kubenswrapper[5125]: E1208 19:30:33.773642 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" podUID="e25c18b2-98b7-4c40-a059-08f4821dea99" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.788670 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.799084 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.808454 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.821042 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.838693 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.851994 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.859369 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.859422 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.859436 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.859459 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.859472 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.867242 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.879288 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.898875 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69
b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\
\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.914978 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.929881 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.937859 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.946756 5125 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.955035 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 
19:30:33.961316 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.961428 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.961447 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.961474 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.961492 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:33Z","lastTransitionTime":"2025-12-08T19:30:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.964860 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.976570 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:33 crc kubenswrapper[5125]: I1208 19:30:33.987494 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.000747 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.014461 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.063687 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.063767 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.063791 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.063823 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.063845 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.166500 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.166557 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.166574 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.166592 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.166604 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.268340 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.268407 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.268426 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.268453 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.268495 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.370857 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.370963 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.370988 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.371018 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.371079 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.473704 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.473823 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.474132 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.474190 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.474209 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.502886 5125 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.576964 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.577081 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.577108 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.577138 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.577177 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.679595 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.679690 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.679715 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.679745 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.679768 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.767528 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.767758 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.767925 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:34 crc kubenswrapper[5125]: E1208 19:30:34.767931 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:34 crc kubenswrapper[5125]: E1208 19:30:34.768058 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:34 crc kubenswrapper[5125]: E1208 19:30:34.768113 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:34 crc kubenswrapper[5125]: E1208 19:30:34.770288 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:34 crc kubenswrapper[5125]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:34 crc kubenswrapper[5125]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:34 crc kubenswrapper[5125]: set -o allexport Dec 08 19:30:34 crc kubenswrapper[5125]: source "/env/_master" Dec 08 19:30:34 crc kubenswrapper[5125]: set +o allexport Dec 08 19:30:34 crc kubenswrapper[5125]: fi Dec 08 19:30:34 crc kubenswrapper[5125]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 08 19:30:34 crc kubenswrapper[5125]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 08 19:30:34 crc kubenswrapper[5125]: ho_enable="--enable-hybrid-overlay" Dec 08 19:30:34 crc kubenswrapper[5125]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 08 19:30:34 crc kubenswrapper[5125]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 08 19:30:34 crc kubenswrapper[5125]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 08 19:30:34 crc kubenswrapper[5125]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:34 crc kubenswrapper[5125]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 08 19:30:34 crc kubenswrapper[5125]: --webhook-host=127.0.0.1 \ Dec 08 19:30:34 crc kubenswrapper[5125]: --webhook-port=9743 \ Dec 08 19:30:34 crc kubenswrapper[5125]: ${ho_enable} \ Dec 08 19:30:34 crc kubenswrapper[5125]: --enable-interconnect \ Dec 08 19:30:34 crc 
kubenswrapper[5125]: --disable-approver \ Dec 08 19:30:34 crc kubenswrapper[5125]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 08 19:30:34 crc kubenswrapper[5125]: --wait-for-kubernetes-api=200s \ Dec 08 19:30:34 crc kubenswrapper[5125]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 08 19:30:34 crc kubenswrapper[5125]: --loglevel="${LOGLEVEL}" Dec 08 19:30:34 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions
:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:34 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:34 crc kubenswrapper[5125]: E1208 19:30:34.771049 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:34 crc kubenswrapper[5125]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 19:30:34 crc kubenswrapper[5125]: set -uo pipefail Dec 08 19:30:34 crc kubenswrapper[5125]: Dec 08 19:30:34 crc kubenswrapper[5125]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 19:30:34 crc kubenswrapper[5125]: Dec 08 19:30:34 crc kubenswrapper[5125]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 19:30:34 crc kubenswrapper[5125]: HOSTS_FILE="/etc/hosts" Dec 08 19:30:34 crc kubenswrapper[5125]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 19:30:34 crc kubenswrapper[5125]: Dec 08 19:30:34 crc kubenswrapper[5125]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 19:30:34 crc kubenswrapper[5125]: Dec 08 19:30:34 crc kubenswrapper[5125]: # Make a temporary file with the old hosts file's attributes. Dec 08 19:30:34 crc kubenswrapper[5125]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 19:30:34 crc kubenswrapper[5125]: echo "Failed to preserve hosts file. Exiting." 
Dec 08 19:30:34 crc kubenswrapper[5125]: exit 1 Dec 08 19:30:34 crc kubenswrapper[5125]: fi Dec 08 19:30:34 crc kubenswrapper[5125]: Dec 08 19:30:34 crc kubenswrapper[5125]: while true; do Dec 08 19:30:34 crc kubenswrapper[5125]: declare -A svc_ips Dec 08 19:30:34 crc kubenswrapper[5125]: for svc in "${services[@]}"; do Dec 08 19:30:34 crc kubenswrapper[5125]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 19:30:34 crc kubenswrapper[5125]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 19:30:34 crc kubenswrapper[5125]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 19:30:34 crc kubenswrapper[5125]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 19:30:34 crc kubenswrapper[5125]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:34 crc kubenswrapper[5125]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:34 crc kubenswrapper[5125]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 19:30:34 crc kubenswrapper[5125]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 19:30:34 crc kubenswrapper[5125]: for i in ${!cmds[*]} Dec 08 19:30:34 crc kubenswrapper[5125]: do Dec 08 19:30:34 crc kubenswrapper[5125]: ips=($(eval "${cmds[i]}")) Dec 08 19:30:34 crc kubenswrapper[5125]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 19:30:34 crc kubenswrapper[5125]: svc_ips["${svc}"]="${ips[@]}" Dec 08 19:30:34 crc kubenswrapper[5125]: break Dec 08 19:30:34 crc kubenswrapper[5125]: fi Dec 08 19:30:34 crc kubenswrapper[5125]: done Dec 08 19:30:34 crc kubenswrapper[5125]: done Dec 08 19:30:34 crc kubenswrapper[5125]: Dec 08 19:30:34 crc kubenswrapper[5125]: # Update /etc/hosts only if we get valid service IPs Dec 08 19:30:34 crc kubenswrapper[5125]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 19:30:34 crc kubenswrapper[5125]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 19:30:34 crc kubenswrapper[5125]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 19:30:34 crc kubenswrapper[5125]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 19:30:34 crc kubenswrapper[5125]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 19:30:34 crc kubenswrapper[5125]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 19:30:34 crc kubenswrapper[5125]: sleep 60 & wait Dec 08 19:30:34 crc kubenswrapper[5125]: continue Dec 08 19:30:34 crc kubenswrapper[5125]: fi Dec 08 19:30:34 crc kubenswrapper[5125]: Dec 08 19:30:34 crc kubenswrapper[5125]: # Append resolver entries for services Dec 08 19:30:34 crc kubenswrapper[5125]: rc=0 Dec 08 19:30:34 crc kubenswrapper[5125]: for svc in "${!svc_ips[@]}"; do Dec 08 19:30:34 crc kubenswrapper[5125]: for ip in ${svc_ips[${svc}]}; do Dec 08 19:30:34 crc kubenswrapper[5125]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 08 19:30:34 crc kubenswrapper[5125]: done Dec 08 19:30:34 crc kubenswrapper[5125]: done Dec 08 19:30:34 crc kubenswrapper[5125]: if [[ $rc -ne 0 ]]; then Dec 08 19:30:34 crc kubenswrapper[5125]: sleep 60 & wait Dec 08 19:30:34 crc kubenswrapper[5125]: continue Dec 08 19:30:34 crc kubenswrapper[5125]: fi Dec 08 19:30:34 crc kubenswrapper[5125]: Dec 08 19:30:34 crc kubenswrapper[5125]: Dec 08 19:30:34 crc kubenswrapper[5125]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 19:30:34 crc kubenswrapper[5125]: # Replace /etc/hosts with our modified version if needed Dec 08 19:30:34 crc kubenswrapper[5125]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 19:30:34 crc kubenswrapper[5125]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 19:30:34 crc kubenswrapper[5125]: fi Dec 08 19:30:34 crc kubenswrapper[5125]: sleep 60 & wait Dec 08 19:30:34 crc kubenswrapper[5125]: unset svc_ips Dec 08 19:30:34 crc kubenswrapper[5125]: done Dec 08 19:30:34 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptppk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-txvvl_openshift-dns(afa3059b-1744-4855-ab93-3133529920d5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:34 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:34 crc kubenswrapper[5125]: E1208 19:30:34.772433 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 19:30:34 crc kubenswrapper[5125]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 19:30:34 crc kubenswrapper[5125]: if [[ -f "/env/_master" ]]; then Dec 08 19:30:34 crc kubenswrapper[5125]: set -o allexport Dec 08 19:30:34 crc kubenswrapper[5125]: source "/env/_master" Dec 08 19:30:34 crc kubenswrapper[5125]: set +o allexport Dec 08 19:30:34 crc 
kubenswrapper[5125]: fi Dec 08 19:30:34 crc kubenswrapper[5125]: Dec 08 19:30:34 crc kubenswrapper[5125]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 19:30:34 crc kubenswrapper[5125]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 19:30:34 crc kubenswrapper[5125]: --disable-webhook \ Dec 08 19:30:34 crc kubenswrapper[5125]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 19:30:34 crc kubenswrapper[5125]: --loglevel="${LOGLEVEL}" Dec 08 19:30:34 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 19:30:34 crc kubenswrapper[5125]: > logger="UnhandledError" Dec 08 19:30:34 crc kubenswrapper[5125]: E1208 19:30:34.772266 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-txvvl" podUID="afa3059b-1744-4855-ab93-3133529920d5" Dec 08 19:30:34 crc kubenswrapper[5125]: E1208 19:30:34.773560 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.781562 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.781597 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.781627 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.781640 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.781650 5125 
setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.884461 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.884525 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.884543 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.884566 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.884583 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.986669 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.986726 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.986739 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.986769 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:34 crc kubenswrapper[5125]: I1208 19:30:34.986783 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:34Z","lastTransitionTime":"2025-12-08T19:30:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.089963 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.090057 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.090176 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.090217 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.090254 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.192798 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.192853 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.192872 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.192896 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.192914 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.612379 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.612449 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.612469 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.612494 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.612513 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.715260 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.715352 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.715381 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.715419 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.715445 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.767081 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:35 crc kubenswrapper[5125]: E1208 19:30:35.767256 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.817595 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.817712 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.817737 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.817764 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.817783 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.920300 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.920362 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.920382 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.920405 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:35 crc kubenswrapper[5125]: I1208 19:30:35.920424 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:35Z","lastTransitionTime":"2025-12-08T19:30:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.022691 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.022771 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.022791 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.022817 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.022836 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.125504 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.125578 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.125596 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.125661 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.125680 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.228402 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.228468 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.228487 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.228511 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.228528 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.330968 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.331018 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.331031 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.331050 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.331062 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.433460 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.433519 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.433529 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.433548 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.433559 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.472460 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.472696 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 19:30:52.472671267 +0000 UTC m=+109.243161541 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.472796 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.472838 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.473021 5125 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.473036 5125 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.473064 5125 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:52.473057138 +0000 UTC m=+109.243547412 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.473133 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:52.473104969 +0000 UTC m=+109.243595253 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.473210 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.473370 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.473389 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.473404 5125 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.473458 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. 
No retries permitted until 2025-12-08 19:30:52.473445168 +0000 UTC m=+109.243935452 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.536542 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.536674 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.536713 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.536746 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.536767 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.574323 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.574423 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.574524 5125 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.574600 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs podName:9a677937-278d-4989-b196-40d5daba436d nodeName:}" failed. No retries permitted until 2025-12-08 19:30:52.57457968 +0000 UTC m=+109.345069964 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs") pod "network-metrics-daemon-7lwbz" (UID: "9a677937-278d-4989-b196-40d5daba436d") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.574723 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.574764 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.574801 5125 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.574898 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:30:52.574855458 +0000 UTC m=+109.345345742 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.639325 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.639412 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.639430 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.639457 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.639476 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.742086 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.742166 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.742190 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.742217 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.742239 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.766810 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.766810 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.766814 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.767134 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.767211 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.767901 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.768494 5125 scope.go:117] "RemoveContainer" containerID="346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af" Dec 08 19:30:36 crc kubenswrapper[5125]: E1208 19:30:36.768861 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.844549 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.844592 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.844602 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.844630 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.844640 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.947183 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.947251 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.947272 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.947301 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:36 crc kubenswrapper[5125]: I1208 19:30:36.947324 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:36Z","lastTransitionTime":"2025-12-08T19:30:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.050415 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.050492 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.050511 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.050591 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.050642 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.152705 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.152767 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.152784 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.152808 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.152826 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.255599 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.255718 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.255744 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.255773 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.255796 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.358387 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.358470 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.358490 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.358532 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.358570 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.462476 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.462550 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.462577 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.462663 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.462690 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.565862 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.566175 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.566195 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.566220 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.566239 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.668144 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.668216 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.668232 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.668253 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.668271 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.766473 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:37 crc kubenswrapper[5125]: E1208 19:30:37.766701 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 19:30:37 crc kubenswrapper[5125]: E1208 19:30:37.769668 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 19:30:37 crc kubenswrapper[5125]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash
Dec 08 19:30:37 crc kubenswrapper[5125]: set -euo pipefail
Dec 08 19:30:37 crc kubenswrapper[5125]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key
Dec 08 19:30:37 crc kubenswrapper[5125]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt
Dec 08 19:30:37 crc kubenswrapper[5125]: # As the secret mount is optional we must wait for the files to be present.
Dec 08 19:30:37 crc kubenswrapper[5125]: # The service is created in monitor.yaml and this is created in sdn.yaml.
Dec 08 19:30:37 crc kubenswrapper[5125]: TS=$(date +%s)
Dec 08 19:30:37 crc kubenswrapper[5125]: WARN_TS=$(( ${TS} + $(( 20 * 60)) ))
Dec 08 19:30:37 crc kubenswrapper[5125]: HAS_LOGGED_INFO=0
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: log_missing_certs(){
Dec 08 19:30:37 crc kubenswrapper[5125]: CUR_TS=$(date +%s)
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes.
Dec 08 19:30:37 crc kubenswrapper[5125]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then
Dec 08 19:30:37 crc kubenswrapper[5125]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes.
Dec 08 19:30:37 crc kubenswrapper[5125]: HAS_LOGGED_INFO=1
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]: }
Dec 08 19:30:37 crc kubenswrapper[5125]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do
Dec 08 19:30:37 crc kubenswrapper[5125]: log_missing_certs
Dec 08 19:30:37 crc kubenswrapper[5125]: sleep 5
Dec 08 19:30:37 crc kubenswrapper[5125]: done
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy
Dec 08 19:30:37 crc kubenswrapper[5125]: exec /usr/bin/kube-rbac-proxy \
Dec 08 19:30:37 crc kubenswrapper[5125]: --logtostderr \
Dec 08 19:30:37 crc kubenswrapper[5125]: --secure-listen-address=:9108 \
Dec 08 19:30:37 crc kubenswrapper[5125]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
Dec 08 19:30:37 crc kubenswrapper[5125]: --upstream=http://127.0.0.1:29108/ \
Dec 08 19:30:37 crc kubenswrapper[5125]: --tls-private-key-file=${TLS_PK} \
Dec 08 19:30:37 crc kubenswrapper[5125]: --tls-cert-file=${TLS_CERT}
Dec 08 19:30:37 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-w8mbx_openshift-ovn-kubernetes(48d0e864-6620-4a75-baa4-8653836f3aab): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 19:30:37 crc kubenswrapper[5125]: > logger="UnhandledError"
Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.770804 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.770843 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.770859 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.770875 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.770886 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:37 crc kubenswrapper[5125]: E1208 19:30:37.772723 5125 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 19:30:37 crc kubenswrapper[5125]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ -f "/env/_master" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: set -o allexport
Dec 08 19:30:37 crc kubenswrapper[5125]: source "/env/_master"
Dec 08 19:30:37 crc kubenswrapper[5125]: set +o allexport
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: ovn_v4_join_subnet_opt=
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "" != "" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet "
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]: ovn_v6_join_subnet_opt=
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "" != "" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet "
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: ovn_v4_transit_switch_subnet_opt=
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "" != "" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet "
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]: ovn_v6_transit_switch_subnet_opt=
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "" != "" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet "
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: dns_name_resolver_enabled_flag=
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "false" == "true" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver"
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: persistent_ips_enabled_flag="--enable-persistent-ips"
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: # This is needed so that converting clusters from GA to TP
Dec 08 19:30:37 crc kubenswrapper[5125]: # will rollout control plane pods as well
Dec 08 19:30:37 crc kubenswrapper[5125]: network_segmentation_enabled_flag=
Dec 08 19:30:37 crc kubenswrapper[5125]: multi_network_enabled_flag=
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "true" == "true" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: multi_network_enabled_flag="--enable-multi-network"
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "true" == "true" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "true" != "true" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: multi_network_enabled_flag="--enable-multi-network"
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]: network_segmentation_enabled_flag="--enable-network-segmentation"
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: route_advertisements_enable_flag=
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "false" == "true" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: route_advertisements_enable_flag="--enable-route-advertisements"
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: preconfigured_udn_addresses_enable_flag=
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "false" == "true" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses"
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: # Enable multi-network policy if configured (control-plane always full mode)
Dec 08 19:30:37 crc kubenswrapper[5125]: multi_network_policy_enabled_flag=
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "false" == "true" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy"
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: # Enable admin network policy if configured (control-plane always full mode)
Dec 08 19:30:37 crc kubenswrapper[5125]: admin_network_policy_enabled_flag=
Dec 08 19:30:37 crc kubenswrapper[5125]: if [[ "true" == "true" ]]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: admin_network_policy_enabled_flag="--enable-admin-network-policy"
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: if [ "shared" == "shared" ]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: gateway_mode_flags="--gateway-mode shared"
Dec 08 19:30:37 crc kubenswrapper[5125]: elif [ "shared" == "local" ]; then
Dec 08 19:30:37 crc kubenswrapper[5125]: gateway_mode_flags="--gateway-mode local"
Dec 08 19:30:37 crc kubenswrapper[5125]: else
Dec 08 19:30:37 crc kubenswrapper[5125]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"."
Dec 08 19:30:37 crc kubenswrapper[5125]: exit 1
Dec 08 19:30:37 crc kubenswrapper[5125]: fi
Dec 08 19:30:37 crc kubenswrapper[5125]:
Dec 08 19:30:37 crc kubenswrapper[5125]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}"
Dec 08 19:30:37 crc kubenswrapper[5125]: exec /usr/bin/ovnkube \
Dec 08 19:30:37 crc kubenswrapper[5125]: --enable-interconnect \
Dec 08 19:30:37 crc kubenswrapper[5125]: --init-cluster-manager "${K8S_NODE}" \
Dec 08 19:30:37 crc kubenswrapper[5125]: --config-file=/run/ovnkube-config/ovnkube.conf \
Dec 08 19:30:37 crc kubenswrapper[5125]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \
Dec 08 19:30:37 crc kubenswrapper[5125]: --metrics-bind-address "127.0.0.1:29108" \
Dec 08 19:30:37 crc kubenswrapper[5125]: --metrics-enable-pprof \
Dec 08 19:30:37 crc kubenswrapper[5125]: --metrics-enable-config-duration \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${ovn_v4_join_subnet_opt} \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${ovn_v6_join_subnet_opt} \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${ovn_v4_transit_switch_subnet_opt} \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${ovn_v6_transit_switch_subnet_opt} \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${dns_name_resolver_enabled_flag} \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${persistent_ips_enabled_flag} \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${multi_network_enabled_flag} \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${network_segmentation_enabled_flag} \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${gateway_mode_flags} \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${route_advertisements_enable_flag} \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${preconfigured_udn_addresses_enable_flag} \
Dec 08 19:30:37 crc kubenswrapper[5125]: --enable-egress-ip=true \
Dec 08 19:30:37 crc kubenswrapper[5125]: --enable-egress-firewall=true \
Dec 08 19:30:37 crc kubenswrapper[5125]: --enable-egress-qos=true \
Dec 08 19:30:37 crc kubenswrapper[5125]: --enable-egress-service=true \
Dec 08 19:30:37 crc kubenswrapper[5125]: --enable-multicast \
Dec 08 19:30:37 crc kubenswrapper[5125]: --enable-multi-external-gateway=true \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${multi_network_policy_enabled_flag} \
Dec 08 19:30:37 crc kubenswrapper[5125]: ${admin_network_policy_enabled_flag}
Dec 08 19:30:37 crc kubenswrapper[5125]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twvrb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-w8mbx_openshift-ovn-kubernetes(48d0e864-6620-4a75-baa4-8653836f3aab): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 19:30:37 crc kubenswrapper[5125]: > logger="UnhandledError"
Dec 08 19:30:37 crc kubenswrapper[5125]: E1208 19:30:37.774124 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" podUID="48d0e864-6620-4a75-baa4-8653836f3aab"
Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.873569 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.873667 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.873684 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.873710 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.873729 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
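The kube-rbac-proxy wrapper logged above polls for an optionally-mounted TLS secret before exec'ing the proxy. A minimal self-contained sketch of that wait loop follows; the directory is a parameter here purely for illustration (the real container hard-codes /etc/pki/tls/metrics-cert), and the warn deadline is spelled out as `"${warn_ts}"` where the logged script relies on bash's arithmetic contexts evaluating the bare name `WARN_TS`:

```shell
#!/bin/sh
# Sketch of the cert-wait loop from the logged kube-rbac-proxy wrapper.
# The cert directory is parameterized here (the real container hard-codes
# /etc/pki/tls/metrics-cert); names and messages are shortened.
wait_for_certs() {
  cert_dir=$1
  tls_pk="${cert_dir}/tls.key"
  tls_cert="${cert_dir}/tls.crt"
  warn_ts=$(( $(date +%s) + 20 * 60 ))  # switch to WARN after 20 minutes
  has_logged_info=0
  while [ ! -f "${tls_pk}" ] || [ ! -f "${tls_cert}" ]; do
    cur_ts=$(date +%s)
    if [ "${cur_ts}" -gt "${warn_ts}" ]; then
      echo "$(date -Iseconds) WARN: metrics cert not mounted after 20 minutes."
    elif [ "${has_logged_info}" -eq 0 ]; then
      echo "$(date -Iseconds) INFO: metrics cert not mounted. Waiting 20 minutes."
      has_logged_info=1
    fi
    sleep 5
  done
  echo "$(date -Iseconds) INFO: certs mounted"
}
```

Because the secret mount is optional, the container can sit in this loop indefinitely, which is why the logged script logs INFO once and escalates to WARN only after the 20-minute deadline.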
Has your network provider started?"} Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.976083 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.976151 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.976165 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.976187 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:37 crc kubenswrapper[5125]: I1208 19:30:37.976201 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:37Z","lastTransitionTime":"2025-12-08T19:30:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.078548 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.078654 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.078675 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.078701 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.078719 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.181087 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.181185 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.181212 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.181243 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.181265 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.283900 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.283949 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.283961 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.283978 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.283993 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.299150 5125 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.387188 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.387261 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.387280 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.387305 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.387324 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.490019 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.490113 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.490138 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.490169 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.490192 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.592794 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.592855 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.592873 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.592896 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.592914 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.699993 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.700240 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.700253 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.700270 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.700301 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.766795 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:38 crc kubenswrapper[5125]: E1208 19:30:38.767003 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.767105 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.767183 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:30:38 crc kubenswrapper[5125]: E1208 19:30:38.767329 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 19:30:38 crc kubenswrapper[5125]: E1208 19:30:38.767660 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.803936 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.803995 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.804013 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.804039 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.804058 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.906088 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.906159 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.906174 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.906195 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:38 crc kubenswrapper[5125]: I1208 19:30:38.906213 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:38Z","lastTransitionTime":"2025-12-08T19:30:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.008311 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.008892 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.008933 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.008961 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.008984 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.111173 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.111246 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.111261 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.111280 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.111293 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.212908 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.212952 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.212963 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.212978 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.212989 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.315421 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.315483 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.315496 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.315517 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.315531 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.417979 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.418048 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.418063 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.418082 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.418097 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.520184 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.520231 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.520243 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.520260 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.520271 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.627174 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.627360 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.627390 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.627406 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.627415 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.729487 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.729569 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.729588 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.729657 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.729681 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.767988 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:30:39 crc kubenswrapper[5125]: E1208 19:30:39.768158 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.831770 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.831835 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.831853 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.831876 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.831895 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.934443 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.934505 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.934522 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.934542 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:39 crc kubenswrapper[5125]: I1208 19:30:39.934556 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:39Z","lastTransitionTime":"2025-12-08T19:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.037392 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.037665 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.037757 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.037855 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.037950 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.140222 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.140561 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.140722 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.140850 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.140967 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.243401 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.243992 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.244155 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.244310 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.244456 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.347008 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.347112 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.347143 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.347172 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.347195 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.449471 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.449515 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.449529 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.449544 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.449558 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.552680 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.552773 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.552793 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.552818 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.552835 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.655056 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.655096 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.655107 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.655120 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.655133 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.758255 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.758980 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.759203 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.759430 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.759731 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.766782 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:40 crc kubenswrapper[5125]: E1208 19:30:40.767004 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.767065 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:30:40 crc kubenswrapper[5125]: E1208 19:30:40.767265 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.767466 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:40 crc kubenswrapper[5125]: E1208 19:30:40.767800 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.862321 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.862398 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.862421 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.862450 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.862472 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.965164 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.965424 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.965526 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.965684 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:40 crc kubenswrapper[5125]: I1208 19:30:40.965805 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:40Z","lastTransitionTime":"2025-12-08T19:30:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.068219 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.068289 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.068314 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.068345 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.068370 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.170362 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.170404 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.170416 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.170432 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.170443 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.273291 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.273998 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.274037 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.274064 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.274084 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.376970 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.377051 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.377079 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.377110 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.377134 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.479382 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.479546 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.479568 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.479589 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.479639 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.582392 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.582448 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.582462 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.582479 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.582492 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.684544 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.684580 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.684589 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.684602 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.684633 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.766921 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:41 crc kubenswrapper[5125]: E1208 19:30:41.767159 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.787457 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.787500 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.787519 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.787541 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.787562 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.889833 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.889878 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.889889 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.889902 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.889915 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.991681 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.992077 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.992338 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.992502 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.992740 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.998313 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.998522 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.998741 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.998956 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:41 crc kubenswrapper[5125]: I1208 19:30:41.999120 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:41Z","lastTransitionTime":"2025-12-08T19:30:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: E1208 19:30:42.016498 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.021954 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.022371 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.022578 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.022866 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.023310 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: E1208 19:30:42.039661 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.044413 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.044477 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.044504 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.044537 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.044561 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.067982 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.068035 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.068056 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.068080 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.068097 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.084816 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.084883 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.084911 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.084939 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.084962 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: E1208 19:30:42.098943 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400456Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861256Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cc970274-9f45-4e00-af2e-908ff2f74194\\\",\\\"systemUUID\\\":\\\"3204b44a-5260-4c04-b0d1-92575bcb7d69\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:42 crc kubenswrapper[5125]: E1208 19:30:42.099645 5125 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.101426 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.101483 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.101506 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.101531 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.101554 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.203694 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.203776 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.203803 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.203833 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.203855 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.306450 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.306694 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.306728 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.306760 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.306782 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.409357 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.409429 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.409452 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.409482 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.409504 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.512526 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.512573 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.512586 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.512601 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.512634 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.615475 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.615556 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.615575 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.615600 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.615658 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.718002 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.718089 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.718117 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.718143 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.718162 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.767109 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.767165 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:42 crc kubenswrapper[5125]: E1208 19:30:42.767276 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.767396 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:42 crc kubenswrapper[5125]: E1208 19:30:42.767598 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:42 crc kubenswrapper[5125]: E1208 19:30:42.767885 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.820477 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.820549 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.820573 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.820651 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.820676 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.923361 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.923443 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.923463 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.923490 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:42 crc kubenswrapper[5125]: I1208 19:30:42.923531 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:42Z","lastTransitionTime":"2025-12-08T19:30:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.025467 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.025672 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.025695 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.025718 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.025735 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.127994 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.128037 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.128050 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.128064 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.128073 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.229951 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.230000 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.230011 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.230028 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.230038 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.332193 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.332264 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.332284 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.332303 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.332315 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.434148 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.434189 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.434204 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.434218 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.434229 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.536100 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.536189 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.536211 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.536241 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.536259 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.639160 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.639217 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.639232 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.639254 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.639273 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.742367 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.742449 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.742475 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.742507 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.742529 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.768320 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:43 crc kubenswrapper[5125]: E1208 19:30:43.768577 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.788496 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.800968 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.822060 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69
b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\
\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.838744 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.847377 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.847445 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.847470 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.847502 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.847525 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.858985 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.872364 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.883086 5125 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.897245 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 
19:30:43.908478 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.925386 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.934501 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.945204 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.948901 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.948948 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.948960 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.948978 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.948990 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:43Z","lastTransitionTime":"2025-12-08T19:30:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.955141 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.975829 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.984183 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:43 crc kubenswrapper[5125]: I1208 19:30:43.992261 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.002914 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.015662 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.024449 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.050493 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.050535 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.050543 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.050560 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.050571 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.145544 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-jjj2h" event={"ID":"05229a97-6cb6-4842-9ec3-f68831b2daf5","Type":"ContainerStarted","Data":"cc3032e8f610d4cc9daa6be30c39f50c5d4bd6f22253126a48566e8e3ef40af1"} Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.152991 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.153028 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.153042 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.153063 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.153077 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.153716 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.161703 5125 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.170572 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 
19:30:44.177396 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc3032e8f610d4cc9daa6be30c39f50c5d4bd6f22253126a48566e8e3ef40af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.187756 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.193762 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.213780 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.223197 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.239760 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"5
0Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61b
f361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\
\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID
\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.248187 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.255142 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.255185 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.255219 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.255231 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.255248 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.255263 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.269488 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.296459 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.307292 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.320162 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.334996 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.349534 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi
\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.357543 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.357584 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.357596 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.357643 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.357656 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.360808 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.371326 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.460125 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.460184 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.460202 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.460222 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.460238 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.562890 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.562956 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.562976 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.563000 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.563019 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.666026 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.666085 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.666102 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.666124 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.666144 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.768673 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.768764 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:44 crc kubenswrapper[5125]: E1208 19:30:44.768942 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 19:30:44 crc kubenswrapper[5125]: E1208 19:30:44.769878 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.770025 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:30:44 crc kubenswrapper[5125]: E1208 19:30:44.770183 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.770726 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.770809 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.770838 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.771035 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.771066 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.873897 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.874338 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.874701 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.874783 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.874881 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.977535 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.977579 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.977591 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.977632 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:44 crc kubenswrapper[5125]: I1208 19:30:44.977645 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:44Z","lastTransitionTime":"2025-12-08T19:30:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.081209 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.081289 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.081316 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.081346 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.081368 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.149951 5125 generic.go:358] "Generic (PLEG): container finished" podID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerID="79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8" exitCode=0 Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.150030 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerDied","Data":"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8"} Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.182795 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z
\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.184232 5125 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.184303 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.184321 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.184343 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.184359 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.196786 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.211759 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.224992 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.278966 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resource
s\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.285807 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.285834 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.285843 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.285855 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.285864 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.306437 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.319564 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.327011 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b
18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"
2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.339517 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4eba
ac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"
gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID
\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.352207 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.363539 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.373357 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.384747 5125 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.387853 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 
19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.387910 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.387925 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.387941 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.387954 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.393476 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.400877 5125 
status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc3032e8f610d4cc9daa6be30c39f50c5d4bd6f22253126a48566e8e3ef40af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\
\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.418123 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.425876 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.439425 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.451562 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.490400 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.490444 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.490455 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.490469 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.490481 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.593069 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.593111 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.593121 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.593135 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.593145 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.694837 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.694881 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.694891 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.694907 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.694917 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.767699 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:45 crc kubenswrapper[5125]: E1208 19:30:45.767875 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.796424 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.796468 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.796484 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.796502 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.796513 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.898887 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.898931 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.898942 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.898961 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:45 crc kubenswrapper[5125]: I1208 19:30:45.898972 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:45Z","lastTransitionTime":"2025-12-08T19:30:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.001668 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.001731 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.001749 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.001776 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.001795 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.105006 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.105086 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.105114 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.105147 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.105174 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.158888 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerStarted","Data":"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.158930 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerStarted","Data":"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.158941 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerStarted","Data":"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.158951 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerStarted","Data":"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.158963 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerStarted","Data":"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.158973 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerStarted","Data":"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100"} Dec 08 19:30:46 crc kubenswrapper[5125]: 
I1208 19:30:46.160341 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p7g8" event={"ID":"b938d768-ccce-45a6-a982-3f5d6f1a7d98","Type":"ContainerStarted","Data":"eeb6fe61b3247454c6b9d9e1e48175ecc5e5ad0e231b045d3a5f6ac83cef9e81"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.188934 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.198223 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.208123 5125 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.208182 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.208199 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.208221 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.208240 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.210197 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.227178 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 
19:30:46.240430 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc3032e8f610d4cc9daa6be30c39f50c5d4bd6f22253126a48566e8e3ef40af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.258994 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.272232 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.287217 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.302847 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.310104 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.310152 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.310166 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.310185 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.310198 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.321696 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.331135 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.339259 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.352106 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.370198 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resource
s\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.378437 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.388956 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://eeb6fe61b3247454c6b9d9e1e48175ecc5e5ad0e231b045d3a5f6ac83cef9e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc 
kubenswrapper[5125]: I1208 19:30:46.397864 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.407002 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df
0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\
\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.412561 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.412630 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.412645 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.412661 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.412674 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.416481 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.514940 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.514987 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.515004 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.515020 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.515032 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.617273 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.617351 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.617372 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.617397 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.617417 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.719725 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.719774 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.719789 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.719811 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.719824 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.766782 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.767031 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:46 crc kubenswrapper[5125]: E1208 19:30:46.767183 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.767475 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:46 crc kubenswrapper[5125]: E1208 19:30:46.767684 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:46 crc kubenswrapper[5125]: E1208 19:30:46.767720 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.822058 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.822125 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.822146 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.822169 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.822191 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.924310 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.924387 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.924403 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.924426 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:46 crc kubenswrapper[5125]: I1208 19:30:46.924442 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:46Z","lastTransitionTime":"2025-12-08T19:30:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.028447 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.028500 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.028514 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.028531 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.028544 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.130676 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.130724 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.130734 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.130748 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.130757 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.167421 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerStarted","Data":"4429148754b6dfe66ea0f2dc216053dc1461a44db146fe6fd6b58eb9b7aa9462"} Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.167514 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerStarted","Data":"a86a0816bac7ca3fa402c6544237e9e92be21df715faf34c0d65ab20b3280854"} Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.181476 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.193805 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc3032e8f610d4cc9daa6be30c39f50c5d4bd6f22253126a48566e8e3ef40af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.219798 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.231286 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.232437 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.232503 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.232529 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.232560 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.232584 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.243715 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.253947 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.279094 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"5
0Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61b
f361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\
\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID
\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.293071 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.304235 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.317860 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.335411 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.335705 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.335723 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.335744 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.335758 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.344941 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.354427 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4429148754b6dfe66ea0f2dc216053dc1461a44db146fe6fd6b58eb9b7aa9462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a86a0816bac7ca3fa402c6544237e9e92be21df715faf34c0d65ab20b3280854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.364480 5125 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://eeb6fe61b3247454c6b9d9e1e48175ecc5e5ad0e231b045d3a5f6ac83cef9e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mount
Path\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial 
tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.371276 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],
\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.381973 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux
\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.393550 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.403082 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.409880 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.420123 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.437765 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.437811 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.437823 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.437840 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.437852 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.540077 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.540123 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.540135 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.540151 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.540163 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.642813 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.643072 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.643086 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.643103 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.643115 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.745348 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.745419 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.745435 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.745457 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.745478 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.767836 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:30:47 crc kubenswrapper[5125]: E1208 19:30:47.768013 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.847950 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.848039 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.848062 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.848091 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.848112 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.950061 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.950100 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.950109 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.950123 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:47 crc kubenswrapper[5125]: I1208 19:30:47.950132 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:47Z","lastTransitionTime":"2025-12-08T19:30:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.052759 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.052814 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.052832 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.052855 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.052873 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.155204 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.155258 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.155273 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.155291 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.155302 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.173078 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" event={"ID":"e25c18b2-98b7-4c40-a059-08f4821dea99","Type":"ContainerStarted","Data":"74c564c09c3adecc6a6547613a4488d2a69cbe49f9a01e5d2f473060d445e944"}
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.178480 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerStarted","Data":"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1"}
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.183779 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-txvvl" event={"ID":"afa3059b-1744-4855-ab93-3133529920d5","Type":"ContainerStarted","Data":"e6d9b8abd3901dcbc648fce9588b35aa32a0f1f8ee0080bebd443fbdcde01141"}
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.188541 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.198595 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.209085 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.220918 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.231290 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc3032e8f610d4cc9daa6be30c39f50c5d4bd6f22253126a48566e8e3ef40af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.244363 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c564c09c3adecc6a6547613a4488d2a69cbe49f9a01e5d2f473060d445e944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\
"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.250806 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.257985 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.258036 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.258046 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.258060 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.258071 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.262656 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.274968 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.297435 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"5
0Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61b
f361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\
\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID
\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.309288 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.319244 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.331342 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.348541 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.357719 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://4429148754b6dfe66ea0f2dc216053dc1461a44db146fe6fd6b58eb9b7aa9462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a86a0816bac7ca3fa402c6544237e9e92be21df715faf34c0d65ab20b3280854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.360330 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.360367 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.360376 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.360390 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.360400 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.369894 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://eeb6fe61b3247454c6b9d9e1e48175ecc5e5ad0e231b045d3a5f6ac83cef9e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supple
mentalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.378813 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.388966 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\
\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user
\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.396912 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.415378 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.424487 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.433514 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.441861 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.456233 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.462288 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.462325 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.462338 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.462354 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.462367 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.464799 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4429148754b6dfe66ea0f2dc216053dc1461a44db146fe6fd6b58eb9b7aa9462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:46Z\\\"}},\\\"user\\\":{\\\
"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a86a0816bac7ca3fa402c6544237e9e92be21df715faf34c0d65ab20b3280854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.473882 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://eeb6fe61b3247454c6b9d9e1e48175ecc5e5ad0e231b045d3a5f6ac83cef9e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.480705 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":
{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.490136 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"rea
dy\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b
158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.498071 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.505452 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.512820 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.521122 5125 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.529353 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 
19:30:48.536169 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc3032e8f610d4cc9daa6be30c39f50c5d4bd6f22253126a48566e8e3ef40af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.547125 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c564c09c3adecc6a6547613a4488d2a69cbe49f9a01e5d2f473060d445e944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\
"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.554938 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6d9b8abd3901dcbc648fce9588b35aa32a0f1f8ee0080bebd443fbdcde01141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.564710 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.564766 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.564786 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.564812 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.564832 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.570683 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.586771 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.666833 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.666887 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.666905 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.666931 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.666949 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.767517 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.767633 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.767804 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:48 crc kubenswrapper[5125]: E1208 19:30:48.767790 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:48 crc kubenswrapper[5125]: E1208 19:30:48.768006 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:48 crc kubenswrapper[5125]: E1208 19:30:48.768314 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.771463 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.771514 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.771532 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.771552 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.771568 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.873818 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.873875 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.873890 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.873918 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.873930 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.976157 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.976211 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.976225 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.976241 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:48 crc kubenswrapper[5125]: I1208 19:30:48.976255 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:48Z","lastTransitionTime":"2025-12-08T19:30:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.078424 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.078760 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.078770 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.078784 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.078794 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.182389 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.182457 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.182482 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.182512 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.182535 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.188814 5125 generic.go:358] "Generic (PLEG): container finished" podID="e25c18b2-98b7-4c40-a059-08f4821dea99" containerID="74c564c09c3adecc6a6547613a4488d2a69cbe49f9a01e5d2f473060d445e944" exitCode=0 Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.188954 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" event={"ID":"e25c18b2-98b7-4c40-a059-08f4821dea99","Type":"ContainerDied","Data":"74c564c09c3adecc6a6547613a4488d2a69cbe49f9a01e5d2f473060d445e944"} Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.191272 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"fb648de1f8795d5d32e57abbd9aee1fd17700b016c32a33f9fce06ff8d1ad1f4"} Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.203959 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.219111 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.237624 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"5
0Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61b
f361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\
\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID
\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.249460 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.259851 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.276991 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.284650 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.284696 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.284709 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.284726 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.284737 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.299331 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.313102 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4429148754b6dfe66ea0f2dc216053dc1461a44db146fe6fd6b58eb9b7aa9462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a86a0816bac7ca3fa402c6544237e9e92be21df715faf34c0d65ab20b3280854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.324422 5125 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://eeb6fe61b3247454c6b9d9e1e48175ecc5e5ad0e231b045d3a5f6ac83cef9e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mount
Path\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial 
tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.333063 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],
\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.345193 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux
\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.355281 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.365219 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.375556 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.385568 5125 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.388186 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 
19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.388259 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.388284 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.388313 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.388336 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.394952 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.404722 5125 
status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc3032e8f610d4cc9daa6be30c39f50c5d4bd6f22253126a48566e8e3ef40af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\
\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.417054 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c564c09c3adecc6a6547613a4488d2a69cbe49f9a01e5d2f473060d445e944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://74c564c09c3adecc6a6547613a4488d2a69cbe49f9a01e5d2f473060d445e944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6e
b6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.425136 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6d9b8abd3901dcbc648fce9588b35aa32a0f1f8ee0080bebd443fbdcde01141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.432509 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.443003 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.451746 5125 
status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc3032e8f610d4cc9daa6be30c39f50c5d4bd6f22253126a48566e8e3ef40af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\
\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.468479 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c564c09c3adecc6a6547613a4488d2a69cbe49f9a01e5d2f473060d445e944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://74c564c09c3adecc6a6547613a4488d2a69cbe49f9a01e5d2f473060d445e944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6e
b6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.476119 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6d9b8abd3901dcbc648fce9588b35aa32a0f1f8ee0080bebd443fbdcde01141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.486463 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.490023 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.490060 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.490072 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.490086 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.490096 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.496660 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fb648de1f8795d5d32e57abbd9aee1fd17700b016c32a33f9fce06ff8d1ad1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-08T19:30:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.514831 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.524657 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.534732 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.543678 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.565799 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.575651 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8cea827-b8e3-4d92-adea-df0afd2397da\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://4429148754b6dfe66ea0f2dc216053dc1461a44db146fe6fd6b58eb9b7aa9462\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a86a0816bac7ca3fa402c6544237e9e92be21df715faf34c0d65ab20b3280854\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:46Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4c9bz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slhjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.586449 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9p7g8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b938d768-ccce-45a6-a982-3f5d6f1a7d98\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://eeb6fe61b3247454c6b9d9e1e48175ecc5e5ad0e231b045d3a5f6ac83cef9e81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:45Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nzwqc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p7g8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc 
kubenswrapper[5125]: I1208 19:30:49.591893 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.591923 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.591932 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.591945 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.591954 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.596909 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.606977 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux
\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.616557 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.626195 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.636027 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.694942 5125 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.694974 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.694982 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.694995 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.695003 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.771192 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:30:49 crc kubenswrapper[5125]: E1208 19:30:49.771350 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.809057 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.809117 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.809130 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.809148 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.809160 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.919151 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.919292 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.919309 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.919328 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:49 crc kubenswrapper[5125]: I1208 19:30:49.919385 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:49Z","lastTransitionTime":"2025-12-08T19:30:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.029230 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.029274 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.029286 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.029303 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.029314 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.130763 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.130800 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.130811 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.130827 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.130837 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.197681 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"bb45891d6fee42b8d3adb80f3d16a5d0e34df0c8a52e2252871db90c97ee8c97"}
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.197940 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"cfbc917138c6f90202a6c5683a681ecd12968763cb61e0d3ac0ce988f09fb632"}
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.201358 5125 generic.go:358] "Generic (PLEG): container finished" podID="e25c18b2-98b7-4c40-a059-08f4821dea99" containerID="9dbda98552fb18d6fa5bce59ac9d523835359fa63a63424607c69aed20a15ca7" exitCode=0
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.201505 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" event={"ID":"e25c18b2-98b7-4c40-a059-08f4821dea99","Type":"ContainerDied","Data":"9dbda98552fb18d6fa5bce59ac9d523835359fa63a63424607c69aed20a15ca7"}
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.204303 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" event={"ID":"48d0e864-6620-4a75-baa4-8653836f3aab","Type":"ContainerStarted","Data":"16e1ad7ce234905f668415641ca07de1f1c979cfa934d9f44009b0809d0096a9"}
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.206995 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2309c211-00a6-48e5-b99d-349b71a11862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://caca8af5e19887a7e6708058ea051494b18a37f74e2c31cc984ee9e38f34a397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ae73f2390224331e50911458472acd98c531da0be74f86752901a095a79d8d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.221399 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7be318f-1e5a-4c9b-aff6-a0d7423fb520\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://51dd4ebaac488ab269d08cb3c6bd1ab70695582228b86f0ee98bcf2efe730911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPa
th\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6be3cefe94889f1e79893ae2e0cbc2c0e19b158c8b5d1fc78c2396198cdf1b63\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi
\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b524051750cb775841e22d8cd5239926fb9dbb19325e7c8e9d0593caeab1da19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:05Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.232707 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.232742 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.232751 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.232763 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.232771 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.241280 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.256392 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bb45891d6fee42b8d3adb80f3d16a5d0e34df0c8a52e2252871db90c97ee8c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\
"},\\\"containerID\\\":\\\"cri-o://cfbc917138c6f90202a6c5683a681ecd12968763cb61e0d3ac0ce988f09fb632\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.264468 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a677937-278d-4989-b196-40d5daba436d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f8qzs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7lwbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.274279 5125 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48d0e864-6620-4a75-baa4-8653836f3aab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twvrb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-w8mbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.289285 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd8c208-b235-420d-aa03-61fb487f40bc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://45dfdf1c59b5fb6c4c2329c90a050ab925412e0e70f48b865bbd4261ba6cf841\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://df8ae2ed1ee6f83e167f23dd7edc5eaf5e881de6ea7d042f3d4184090b0cf6be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7eb9c33205053ee254860f931fb8051f331e26827a53bee03ec0451ad1c36124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d298f37a316c5a826ff4ee801adab5e87d5796f770ac5d8ce9a7835c6cda52ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 
19:30:50.296773 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-jjj2h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05229a97-6cb6-4842-9ec3-f68831b2daf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc3032e8f610d4cc9daa6be30c39f50c5d4bd6f22253126a48566e8e3ef40af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jdnq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-jjj2h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.308803 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e25c18b2-98b7-4c40-a059-08f4821dea99\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c564c09c3adecc6a6547613a4488d2a69cbe49f9a01e5d2f473060d445e944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://74c564c09c3adecc6a6547613a4488d2a69cbe49f9a01e5d2f473060d445e944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:30:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6e
b6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmsnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjgzs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.317260 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-txvvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"afa3059b-1744-4855-ab93-3133529920d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://e6d9b8abd3901dcbc648fce9588b35aa32a0f1f8ee0080bebd443fbdcde01141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:30:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ptppk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-txvvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.329436 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a65da2-1f6c-4d8c-9235-319e35ed53e6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://346669eece
f937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T19:30:13Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InOrderInformers\\\\\\\" enabled=true\\\\nW1208 19:30:12.581927 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 19:30:12.582093 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 19:30:12.582975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1705152817/tls.crt::/tmp/serving-cert-1705152817/tls.key\\\\\\\"\\\\nI1208 19:30:13.192261 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 19:30:13.193899 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 19:30:13.193911 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 19:30:13.193933 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 19:30:13.193938 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 19:30:13.196934 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 19:30:13.196955 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 19:30:13.196960 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196966 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 19:30:13.196970 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 19:30:13.196973 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 19:30:13.196975 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 19:30:13.196978 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 19:30:13.198675 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:11Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.335694 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.335738 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.335754 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.335774 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.335789 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.341599 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fb648de1f8795d5d32e57abbd9aee1fd17700b016c32a33f9fce06ff8d1ad1f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-08T19:30:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.360040 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a16dd26-4f2d-422b-a3e7-459ca70d7925\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:29:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://e9ed6b4f2152ebdc1484f71e24ba072cbf2b01f9d9feba86cfb7389754fdec5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://dffc632ffcdfed24afccbe6a28e61941232e1cd2efcbafd1f092ab148c0c1697\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1b8499c0a2bf34333f40c474c394b71a76350a7fc194553cf807f2d5faa889c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bd518b12329a228d3ba235314af632769596b1ca8a854f2caf622b9c3847816b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://a8976fcbc73296c5af4cb1d7b4056d864b7d2cae6c8b19dc656ba85a228d2d23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T19:29:08Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c520f68412a2f1ae29f18abb5d8bc664f9252d0dd42c6080ea288256958602f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:04Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d035a35b089a50c4a800eb43846861e14d50add3988134e268f1f5df9428ecb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:06Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://6dc9b4104905e96b339df9604e1a9a669c90bb550ac77534255824fe85f3406b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:29:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:29:07Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:29:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.369027 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.378148 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.388810 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.403707 5125 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aabf1825-0c19-45de-9f9e-fe94777752e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T19:30:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T19:30:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T19:30:44Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42xvf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T19:30:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-k9whn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.428174 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podStartSLOduration=87.428157746 podStartE2EDuration="1m27.428157746s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:50.428131575 +0000 UTC m=+107.198621869" watchObservedRunningTime="2025-12-08 19:30:50.428157746 +0000 UTC m=+107.198648020" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.438031 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.438071 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.438089 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.438113 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 
crc kubenswrapper[5125]: I1208 19:30:50.438129 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.448633 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9p7g8" podStartSLOduration=87.448593004 podStartE2EDuration="1m27.448593004s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:50.447900356 +0000 UTC m=+107.218390650" watchObservedRunningTime="2025-12-08 19:30:50.448593004 +0000 UTC m=+107.219083278" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.494058 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=30.494040714 podStartE2EDuration="30.494040714s" podCreationTimestamp="2025-12-08 19:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:50.48010584 +0000 UTC m=+107.250596124" watchObservedRunningTime="2025-12-08 19:30:50.494040714 +0000 UTC m=+107.264530978" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.539947 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.539990 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 
19:30:50.540002 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.540017 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.540028 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.550438 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.550420536 podStartE2EDuration="31.550420536s" podCreationTimestamp="2025-12-08 19:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:50.550249241 +0000 UTC m=+107.320739545" watchObservedRunningTime="2025-12-08 19:30:50.550420536 +0000 UTC m=+107.320910810" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.573214 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=30.573195386 podStartE2EDuration="30.573195386s" podCreationTimestamp="2025-12-08 19:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:50.562701365 +0000 UTC m=+107.333191669" watchObservedRunningTime="2025-12-08 19:30:50.573195386 +0000 UTC m=+107.343685660" Dec 08 19:30:50 crc 
kubenswrapper[5125]: I1208 19:30:50.632901 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=30.632882528 podStartE2EDuration="30.632882528s" podCreationTimestamp="2025-12-08 19:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:50.622257092 +0000 UTC m=+107.392747396" watchObservedRunningTime="2025-12-08 19:30:50.632882528 +0000 UTC m=+107.403372802" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.633134 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-jjj2h" podStartSLOduration=87.633128634 podStartE2EDuration="1m27.633128634s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:50.632310352 +0000 UTC m=+107.402800636" watchObservedRunningTime="2025-12-08 19:30:50.633128634 +0000 UTC m=+107.403618908" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.642700 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.642741 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.642752 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.642767 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.642778 5125 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.660018 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-txvvl" podStartSLOduration=87.659991854 podStartE2EDuration="1m27.659991854s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:50.658150685 +0000 UTC m=+107.428640969" watchObservedRunningTime="2025-12-08 19:30:50.659991854 +0000 UTC m=+107.430482118" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.745228 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.745513 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.745621 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.745733 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.745818 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.767215 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.767230 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:50 crc kubenswrapper[5125]: E1208 19:30:50.767342 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 19:30:50 crc kubenswrapper[5125]: E1208 19:30:50.767417 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.767428 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 19:30:50 crc kubenswrapper[5125]: E1208 19:30:50.767708 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.848141 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.848186 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.848198 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.848216 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.848228 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.950819 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.950884 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.950907 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.950930 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:50 crc kubenswrapper[5125]: I1208 19:30:50.950945 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:50Z","lastTransitionTime":"2025-12-08T19:30:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.053485 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.053551 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.053577 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.053637 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.053665 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.155821 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.156716 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.156807 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.156834 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.156853 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.211828 5125 generic.go:358] "Generic (PLEG): container finished" podID="e25c18b2-98b7-4c40-a059-08f4821dea99" containerID="3e3908f4ab71be4701d33195449bca2cfd288a397819f43bd4a7f65522af6f0f" exitCode=0 Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.211939 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" event={"ID":"e25c18b2-98b7-4c40-a059-08f4821dea99","Type":"ContainerDied","Data":"3e3908f4ab71be4701d33195449bca2cfd288a397819f43bd4a7f65522af6f0f"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.221232 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerStarted","Data":"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.222843 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.224140 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" event={"ID":"48d0e864-6620-4a75-baa4-8653836f3aab","Type":"ContainerStarted","Data":"b20b0a9605f05d0adc59fb9552e2669c3781c6b2a3e5d64103d79ca5707cf336"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.261719 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.261770 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.261782 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 
19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.261800 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.261812 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.267342 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.269652 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" podStartSLOduration=88.269586753 podStartE2EDuration="1m28.269586753s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:51.268130214 +0000 UTC m=+108.038620568" watchObservedRunningTime="2025-12-08 19:30:51.269586753 +0000 UTC m=+108.040077087" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.304749 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podStartSLOduration=88.304731897 podStartE2EDuration="1m28.304731897s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:51.304004877 +0000 UTC m=+108.074495201" watchObservedRunningTime="2025-12-08 19:30:51.304731897 +0000 
UTC m=+108.075222181" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.363698 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.363740 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.363751 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.363766 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.363777 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.466062 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.466105 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.466117 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.466134 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.466151 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.568551 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.568645 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.568666 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.568690 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.568709 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.670449 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.670486 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.670495 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.670525 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.670535 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.767301 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:51 crc kubenswrapper[5125]: E1208 19:30:51.767496 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.768826 5125 scope.go:117] "RemoveContainer" containerID="346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af" Dec 08 19:30:51 crc kubenswrapper[5125]: E1208 19:30:51.769094 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.772599 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.772659 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.772668 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.772682 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.772691 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.875071 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.875128 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.875138 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.875153 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.875163 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.977013 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.977058 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.977071 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.977086 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:51 crc kubenswrapper[5125]: I1208 19:30:51.977096 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:51Z","lastTransitionTime":"2025-12-08T19:30:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.078691 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.078733 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.078743 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.078755 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.078765 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.168895 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.168941 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.168954 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.168970 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.168982 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.186492 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.186551 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.186568 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.186592 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.186628 5125 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T19:30:52Z","lastTransitionTime":"2025-12-08T19:30:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.220006 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx"] Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.232022 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"0ad4a48e2d5fd6a4df4ae05bca44e7e167dffbd601bb025d0928aec429443248"} Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.232138 5125 generic.go:358] "Generic (PLEG): container finished" podID="e25c18b2-98b7-4c40-a059-08f4821dea99" containerID="09b7e470e66b62ae5723b8f612ace1a74e0577edff9e6b3b2113ffd39e41bdb5" exitCode=0 Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.232208 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.232253 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" event={"ID":"e25c18b2-98b7-4c40-a059-08f4821dea99","Type":"ContainerDied","Data":"09b7e470e66b62ae5723b8f612ace1a74e0577edff9e6b3b2113ffd39e41bdb5"} Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.233167 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.233191 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.235367 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.238153 5125 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.238181 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.240149 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.258368 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bf54099-9882-45a1-b769-73ad1dd7d70c-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.258592 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bf54099-9882-45a1-b769-73ad1dd7d70c-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.258723 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3bf54099-9882-45a1-b769-73ad1dd7d70c-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.258978 5125 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bf54099-9882-45a1-b769-73ad1dd7d70c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.259069 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bf54099-9882-45a1-b769-73ad1dd7d70c-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.262500 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.353884 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7lwbz"] Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.354042 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.354145 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.359769 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bf54099-9882-45a1-b769-73ad1dd7d70c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.359816 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bf54099-9882-45a1-b769-73ad1dd7d70c-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.359845 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bf54099-9882-45a1-b769-73ad1dd7d70c-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.359875 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bf54099-9882-45a1-b769-73ad1dd7d70c-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.359912 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/3bf54099-9882-45a1-b769-73ad1dd7d70c-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.360041 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3bf54099-9882-45a1-b769-73ad1dd7d70c-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.360379 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3bf54099-9882-45a1-b769-73ad1dd7d70c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.360759 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3bf54099-9882-45a1-b769-73ad1dd7d70c-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.375640 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bf54099-9882-45a1-b769-73ad1dd7d70c-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.384813 
5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bf54099-9882-45a1-b769-73ad1dd7d70c-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-2khtx\" (UID: \"3bf54099-9882-45a1-b769-73ad1dd7d70c\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx"
Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.553570 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx"
Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.562422 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.562580 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.562641 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.562593632 +0000 UTC m=+141.333083906 (durationBeforeRetry 32s).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.562720 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.562738 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.562733 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.562751 5125 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.562780 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName:
\"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.562868 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.562849969 +0000 UTC m=+141.333340243 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.562959 5125 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.563004 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.562996413 +0000 UTC m=+141.333486687 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.563044 5125 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.563072 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.563064675 +0000 UTC m=+141.333555039 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 08 19:30:52 crc kubenswrapper[5125]: W1208 19:30:52.572839 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bf54099_9882_45a1_b769_73ad1dd7d70c.slice/crio-f8b9f56f491b98f0ec613ed254cc5d0aae4bb302fcf9e4ccbeb44b6f3804354c WatchSource:0}: Error finding container f8b9f56f491b98f0ec613ed254cc5d0aae4bb302fcf9e4ccbeb44b6f3804354c: Status 404 returned error can't find the container with id f8b9f56f491b98f0ec613ed254cc5d0aae4bb302fcf9e4ccbeb44b6f3804354c
Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.663577 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\"
(UniqueName: \"kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.663659 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.663731 5125 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.663804 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs podName:9a677937-278d-4989-b196-40d5daba436d nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.663783116 +0000 UTC m=+141.434273440 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs") pod "network-metrics-daemon-7lwbz" (UID: "9a677937-278d-4989-b196-40d5daba436d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.663833 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.663874 5125 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.663885 5125 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.663956 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 19:31:24.66392308 +0000 UTC m=+141.434413354 (durationBeforeRetry 32s).
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.744741 5125 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.752125 5125 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.767054 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.767227 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 19:30:52 crc kubenswrapper[5125]: I1208 19:30:52.767282 5125 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:52 crc kubenswrapper[5125]: E1208 19:30:52.767374 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 19:30:53 crc kubenswrapper[5125]: I1208 19:30:53.237348 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" event={"ID":"3bf54099-9882-45a1-b769-73ad1dd7d70c","Type":"ContainerStarted","Data":"84260ebb755415bb66df2a478ebb9585e98c6de4cda2ddcfed62a144bbff2189"}
Dec 08 19:30:53 crc kubenswrapper[5125]: I1208 19:30:53.237416 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" event={"ID":"3bf54099-9882-45a1-b769-73ad1dd7d70c","Type":"ContainerStarted","Data":"f8b9f56f491b98f0ec613ed254cc5d0aae4bb302fcf9e4ccbeb44b6f3804354c"}
Dec 08 19:30:53 crc kubenswrapper[5125]: I1208 19:30:53.239943 5125 generic.go:358] "Generic (PLEG): container finished" podID="e25c18b2-98b7-4c40-a059-08f4821dea99" containerID="d89ee6b67b9467b44542c464fee74bd72349721a8a9b6f986c9106b6b7888d83" exitCode=0
Dec 08 19:30:53 crc kubenswrapper[5125]: I1208 19:30:53.240014 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" event={"ID":"e25c18b2-98b7-4c40-a059-08f4821dea99","Type":"ContainerDied","Data":"d89ee6b67b9467b44542c464fee74bd72349721a8a9b6f986c9106b6b7888d83"}
Dec 08 19:30:53 crc kubenswrapper[5125]: I1208 19:30:53.253601 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-2khtx" podStartSLOduration=90.253583274 podStartE2EDuration="1m30.253583274s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:53.253198564 +0000 UTC m=+110.023688848" watchObservedRunningTime="2025-12-08 19:30:53.253583274 +0000 UTC m=+110.024073548"
Dec 08 19:30:53 crc kubenswrapper[5125]: I1208 19:30:53.767204 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:30:53 crc kubenswrapper[5125]: I1208 19:30:53.769249 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:30:53 crc kubenswrapper[5125]: E1208 19:30:53.769389 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d"
Dec 08 19:30:53 crc kubenswrapper[5125]: E1208 19:30:53.769568 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 19:30:54 crc kubenswrapper[5125]: I1208 19:30:54.246487 5125 generic.go:358] "Generic (PLEG): container finished" podID="e25c18b2-98b7-4c40-a059-08f4821dea99" containerID="d1f2a394066c1ed33bf61734a5a23511f9c0530db796b4e08197349b114275ec" exitCode=0
Dec 08 19:30:54 crc kubenswrapper[5125]: I1208 19:30:54.247148 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" event={"ID":"e25c18b2-98b7-4c40-a059-08f4821dea99","Type":"ContainerDied","Data":"d1f2a394066c1ed33bf61734a5a23511f9c0530db796b4e08197349b114275ec"}
Dec 08 19:30:54 crc kubenswrapper[5125]: I1208 19:30:54.766937 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:54 crc kubenswrapper[5125]: I1208 19:30:54.767071 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:54 crc kubenswrapper[5125]: E1208 19:30:54.767158 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 19:30:54 crc kubenswrapper[5125]: E1208 19:30:54.767302 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 19:30:55 crc kubenswrapper[5125]: I1208 19:30:55.256652 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" event={"ID":"e25c18b2-98b7-4c40-a059-08f4821dea99","Type":"ContainerStarted","Data":"46cab8787ec3e5c0019efaaaf3a0b3640082cb0c1c2fca7a5c3d442762366770"}
Dec 08 19:30:55 crc kubenswrapper[5125]: I1208 19:30:55.279088 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rjgzs" podStartSLOduration=92.279069789 podStartE2EDuration="1m32.279069789s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:30:55.278103243 +0000 UTC m=+112.048593527" watchObservedRunningTime="2025-12-08 19:30:55.279069789 +0000 UTC m=+112.049560073"
Dec 08 19:30:55 crc kubenswrapper[5125]: I1208 19:30:55.767138 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:30:55 crc kubenswrapper[5125]: I1208 19:30:55.767138 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:30:55 crc kubenswrapper[5125]: E1208 19:30:55.767311 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-7lwbz" podUID="9a677937-278d-4989-b196-40d5daba436d"
Dec 08 19:30:55 crc kubenswrapper[5125]: E1208 19:30:55.767329 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 19:30:56 crc kubenswrapper[5125]: I1208 19:30:56.766559 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:56 crc kubenswrapper[5125]: E1208 19:30:56.766948 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 19:30:56 crc kubenswrapper[5125]: I1208 19:30:56.766684 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:56 crc kubenswrapper[5125]: E1208 19:30:56.767144 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 19:30:56 crc kubenswrapper[5125]: I1208 19:30:56.843400 5125 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady"
Dec 08 19:30:56 crc kubenswrapper[5125]: I1208 19:30:56.843717 5125 kubelet_node_status.go:550] "Fast updating node status as it just became ready"
Dec 08 19:30:56 crc kubenswrapper[5125]: I1208 19:30:56.889678 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.290841 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.290857 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.293974 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.294464 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.294886 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.295092 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.295268 5125 reflector.go:430] "Caches populated"
type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.416591 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klpn8\" (UniqueName: \"kubernetes.io/projected/f75614c9-b518-4c59-bd9f-259d9f410e76-kube-api-access-klpn8\") pod \"openshift-apiserver-operator-846cbfc458-ckwnh\" (UID: \"f75614c9-b518-4c59-bd9f-259d9f410e76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.416678 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f75614c9-b518-4c59-bd9f-259d9f410e76-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-ckwnh\" (UID: \"f75614c9-b518-4c59-bd9f-259d9f410e76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.416704 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75614c9-b518-4c59-bd9f-259d9f410e76-config\") pod \"openshift-apiserver-operator-846cbfc458-ckwnh\" (UID: \"f75614c9-b518-4c59-bd9f-259d9f410e76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.518055 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-klpn8\" (UniqueName: \"kubernetes.io/projected/f75614c9-b518-4c59-bd9f-259d9f410e76-kube-api-access-klpn8\") pod \"openshift-apiserver-operator-846cbfc458-ckwnh\" (UID: \"f75614c9-b518-4c59-bd9f-259d9f410e76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208
19:30:57.518104 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f75614c9-b518-4c59-bd9f-259d9f410e76-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-ckwnh\" (UID: \"f75614c9-b518-4c59-bd9f-259d9f410e76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.518347 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75614c9-b518-4c59-bd9f-259d9f410e76-config\") pod \"openshift-apiserver-operator-846cbfc458-ckwnh\" (UID: \"f75614c9-b518-4c59-bd9f-259d9f410e76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.519274 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f75614c9-b518-4c59-bd9f-259d9f410e76-config\") pod \"openshift-apiserver-operator-846cbfc458-ckwnh\" (UID: \"f75614c9-b518-4c59-bd9f-259d9f410e76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.524592 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f75614c9-b518-4c59-bd9f-259d9f410e76-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-ckwnh\" (UID: \"f75614c9-b518-4c59-bd9f-259d9f410e76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.526466 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.526644 5125 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.528975 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.529467 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.530090 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.530199 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.530242 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.530937 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.532347 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-cdw7h"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.532499 5125 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.535517 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.535586 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.535712 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.535742 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.536058 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.536355 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.537958 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-tm7d5"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.538302 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-klpn8\" (UniqueName: \"kubernetes.io/projected/f75614c9-b518-4c59-bd9f-259d9f410e76-kube-api-access-klpn8\") pod \"openshift-apiserver-operator-846cbfc458-ckwnh\" (UID: \"f75614c9-b518-4c59-bd9f-259d9f410e76\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh" Dec
08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.538636 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.540460 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.541309 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.541515 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2wvch"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.541791 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.541946 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.542181 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.542330 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.543259 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.544134 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-fwfm2"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.546670 5125 reflector.go:430] "Caches populated"
type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.546998 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.547360 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.547747 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.548874 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.549489 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.549690 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.549882 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.550060 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.550437 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.550681 5125 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.551012 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.551013 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.551287 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.555237 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-v5nx6"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.559971 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.564758 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.565332 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.565769 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.566894 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.567060 5125 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-5xzhq"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.567442 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.567699 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.572517 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.572822 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.572826 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.572898 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.572936 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.573024 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.573258 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 
19:30:57.573316 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.573442 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.573758 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.573980 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.574047 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.574104 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.574199 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.574209 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.574306 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.574359 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 19:30:57 crc 
kubenswrapper[5125]: I1208 19:30:57.574565 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.575259 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.581570 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.581629 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.582294 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.582468 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.583049 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.583783 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.584529 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.584681 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.584745 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.585450 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.585468 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.585563 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.586785 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.587763 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.589155 5125 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.589362 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.591465 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.591502 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.593760 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-t8fbs"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.594490 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.596138 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.599018 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.600719 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.600755 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.600779 5125 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.600905 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.601405 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.601601 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.601762 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.614020 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.614192 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.614636 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-t8fbs" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.615184 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.617506 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.618337 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.618970 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619118 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619163 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkjf7\" (UniqueName: \"kubernetes.io/projected/a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1-kube-api-access-tkjf7\") pod \"dns-operator-799b87ffcd-fwfm2\" (UID: \"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619197 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a21f262-041c-4938-bf1c-9ba06822ff62-auth-proxy-config\") pod \"machine-approver-54c688565-h5dj4\" (UID: \"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619244 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tmp\" (UniqueName: \"kubernetes.io/empty-dir/89670568-cddd-4d5c-9a13-c8e6bdc340aa-tmp\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619271 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-client-ca\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619296 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1872a46a-0e1f-469d-b403-8a1e0805d291-tmp\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619322 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrww7\" (UniqueName: \"kubernetes.io/projected/965dbdfc-98cd-4eea-847b-36256d95a95e-kube-api-access-mrww7\") pod \"openshift-config-operator-5777786469-5xzhq\" (UID: \"965dbdfc-98cd-4eea-847b-36256d95a95e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619349 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4692\" (UniqueName: \"kubernetes.io/projected/92837ccf-1e39-495e-bbcb-d3eaafd95d15-kube-api-access-p4692\") pod \"console-64d44f6ddf-cdw7h\" (UID: 
\"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619389 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a21f262-041c-4938-bf1c-9ba06822ff62-config\") pod \"machine-approver-54c688565-h5dj4\" (UID: \"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619414 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/89670568-cddd-4d5c-9a13-c8e6bdc340aa-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619447 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/360637cf-82f2-4c3f-8007-8669c23e631c-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619475 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619503 5125 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-oauth-serving-cert\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619532 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-images\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619560 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619590 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-service-ca\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619638 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dtzs\" (UniqueName: \"kubernetes.io/projected/1a21f262-041c-4938-bf1c-9ba06822ff62-kube-api-access-9dtzs\") pod \"machine-approver-54c688565-h5dj4\" (UID: 
\"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619881 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-image-import-ca\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619955 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/965dbdfc-98cd-4eea-847b-36256d95a95e-available-featuregates\") pod \"openshift-config-operator-5777786469-5xzhq\" (UID: \"965dbdfc-98cd-4eea-847b-36256d95a95e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.619996 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620013 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620022 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1-tmp-dir\") pod \"dns-operator-799b87ffcd-fwfm2\" (UID: \"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620094 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-dir\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620151 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92837ccf-1e39-495e-bbcb-d3eaafd95d15-console-oauth-config\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620194 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbgtx\" (UniqueName: \"kubernetes.io/projected/360637cf-82f2-4c3f-8007-8669c23e631c-kube-api-access-nbgtx\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620248 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-trusted-ca-bundle\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620326 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-policies\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620360 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620499 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-config\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620559 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x52dv\" (UniqueName: \"kubernetes.io/projected/89670568-cddd-4d5c-9a13-c8e6bdc340aa-kube-api-access-x52dv\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620828 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/965dbdfc-98cd-4eea-847b-36256d95a95e-serving-cert\") pod \"openshift-config-operator-5777786469-5xzhq\" (UID: \"965dbdfc-98cd-4eea-847b-36256d95a95e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.620987 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621035 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621069 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89670568-cddd-4d5c-9a13-c8e6bdc340aa-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621101 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb139a6e-970e-4662-8bef-8155c86676c4-serving-cert\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621133 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360637cf-82f2-4c3f-8007-8669c23e631c-serving-cert\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621171 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1-metrics-tls\") pod \"dns-operator-799b87ffcd-fwfm2\" (UID: \"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621202 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621224 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1872a46a-0e1f-469d-b403-8a1e0805d291-serving-cert\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621251 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360637cf-82f2-4c3f-8007-8669c23e631c-config\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621276 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621317 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621558 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4pmd\" (UniqueName: \"kubernetes.io/projected/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-kube-api-access-w4pmd\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621587 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fb139a6e-970e-4662-8bef-8155c86676c4-node-pullsecrets\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621634 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621667 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-console-config\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621701 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1a21f262-041c-4938-bf1c-9ba06822ff62-machine-approver-tls\") pod \"machine-approver-54c688565-h5dj4\" (UID: \"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621920 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.621951 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89670568-cddd-4d5c-9a13-c8e6bdc340aa-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.622025 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.622147 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-audit\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.623447 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.623848 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.624232 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.624664 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/89670568-cddd-4d5c-9a13-c8e6bdc340aa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.624886 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.625052 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.625400 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hgxtj"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.626318 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.627822 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.628795 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92837ccf-1e39-495e-bbcb-d3eaafd95d15-console-serving-cert\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.629093 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb139a6e-970e-4662-8bef-8155c86676c4-encryption-config\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.629145 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4krl\" (UniqueName: \"kubernetes.io/projected/1872a46a-0e1f-469d-b403-8a1e0805d291-kube-api-access-t4krl\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.629183 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22br8\" (UniqueName: \"kubernetes.io/projected/cdb7a298-ac30-410b-9ab7-a060a428e88b-kube-api-access-22br8\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.629251 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/360637cf-82f2-4c3f-8007-8669c23e631c-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.629305 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.629341 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-config\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.629441 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb139a6e-970e-4662-8bef-8155c86676c4-audit-dir\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.629579 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljkk8\" (UniqueName: \"kubernetes.io/projected/fb139a6e-970e-4662-8bef-8155c86676c4-kube-api-access-ljkk8\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.629695 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-config\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.629733 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb139a6e-970e-4662-8bef-8155c86676c4-etcd-client\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.631688 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.632741 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.632933 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.633190 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.633358 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.632777 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.633600 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.634210 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.635110 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.635192 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.635376 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.635381 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.635228 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.639228 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.639864 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.640404 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.640423 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.643134 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.643467 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.645991 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.646178 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.649598 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-kr9dh"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.650708 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.660375 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.700069 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.720113 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.730341 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-images\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.730383 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.730409 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-service-ca\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.730428 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9dtzs\" (UniqueName: \"kubernetes.io/projected/1a21f262-041c-4938-bf1c-9ba06822ff62-kube-api-access-9dtzs\") pod \"machine-approver-54c688565-h5dj4\" (UID: \"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.730443 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-image-import-ca\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.730464 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/965dbdfc-98cd-4eea-847b-36256d95a95e-available-featuregates\") pod \"openshift-config-operator-5777786469-5xzhq\" (UID: \"965dbdfc-98cd-4eea-847b-36256d95a95e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.730780 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.730896 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1-tmp-dir\") pod \"dns-operator-799b87ffcd-fwfm2\" (UID: \"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.730989 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-dir\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.731070 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92837ccf-1e39-495e-bbcb-d3eaafd95d15-console-oauth-config\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.731165 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/840b1c0b-8303-40bb-a881-8a974ea23710-tmp\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.731281 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nbgtx\" (UniqueName: \"kubernetes.io/projected/360637cf-82f2-4c3f-8007-8669c23e631c-kube-api-access-nbgtx\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.731381 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-trusted-ca-bundle\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.731550 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1-tmp-dir\") pod \"dns-operator-799b87ffcd-fwfm2\" (UID: \"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.731481 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbd52e79-1f71-46e5-8170-270ba85e62df-etcd-client\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.731967 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-policies\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.731871 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-service-ca\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.731896 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-dir\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.732283 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.732526 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-config\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.732655 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x52dv\" (UniqueName: \"kubernetes.io/projected/89670568-cddd-4d5c-9a13-c8e6bdc340aa-kube-api-access-x52dv\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.732768 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/965dbdfc-98cd-4eea-847b-36256d95a95e-serving-cert\") pod \"openshift-config-operator-5777786469-5xzhq\" (UID: \"965dbdfc-98cd-4eea-847b-36256d95a95e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.732879 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840b1c0b-8303-40bb-a881-8a974ea23710-serving-cert\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.733099 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.733200 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.733310 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89670568-cddd-4d5c-9a13-c8e6bdc340aa-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.733420 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb139a6e-970e-4662-8bef-8155c86676c4-serving-cert\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.733519 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbd52e79-1f71-46e5-8170-270ba85e62df-serving-cert\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.733646 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-config\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.733765 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360637cf-82f2-4c3f-8007-8669c23e631c-serving-cert\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.733865 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1-metrics-tls\") pod \"dns-operator-799b87ffcd-fwfm2\" (UID: \"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.733975 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.733983 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734041 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1872a46a-0e1f-469d-b403-8a1e0805d291-serving-cert\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734072 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8czw\" (UniqueName: \"kubernetes.io/projected/fbd52e79-1f71-46e5-8170-270ba85e62df-kube-api-access-c8czw\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734105 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360637cf-82f2-4c3f-8007-8669c23e631c-config\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734126 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734165 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734183 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w4pmd\" (UniqueName: \"kubernetes.io/projected/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-kube-api-access-w4pmd\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734203 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fb139a6e-970e-4662-8bef-8155c86676c4-node-pullsecrets\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734226 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734282 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fbd52e79-1f71-46e5-8170-270ba85e62df-encryption-config\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734314 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-config\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734449 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/92837ccf-1e39-495e-bbcb-d3eaafd95d15-console-oauth-config\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734534 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.732722 5125 operation_generator.go:615] "MountVolume.SetUp
succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-policies\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734730 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734823 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/fb139a6e-970e-4662-8bef-8155c86676c4-node-pullsecrets\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734956 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/360637cf-82f2-4c3f-8007-8669c23e631c-config\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.734317 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-console-config\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 
19:30:57.735029 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1a21f262-041c-4938-bf1c-9ba06822ff62-machine-approver-tls\") pod \"machine-approver-54c688565-h5dj4\" (UID: \"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735082 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735108 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89670568-cddd-4d5c-9a13-c8e6bdc340aa-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735145 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735135 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fbd52e79-1f71-46e5-8170-270ba85e62df-audit-policies\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735226 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735278 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-audit\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735302 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/89670568-cddd-4d5c-9a13-c8e6bdc340aa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735327 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92837ccf-1e39-495e-bbcb-d3eaafd95d15-console-serving-cert\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735352 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/fb139a6e-970e-4662-8bef-8155c86676c4-encryption-config\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735378 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t4krl\" (UniqueName: \"kubernetes.io/projected/1872a46a-0e1f-469d-b403-8a1e0805d291-kube-api-access-t4krl\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735404 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbd52e79-1f71-46e5-8170-270ba85e62df-trusted-ca-bundle\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735433 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-22br8\" (UniqueName: \"kubernetes.io/projected/cdb7a298-ac30-410b-9ab7-a060a428e88b-kube-api-access-22br8\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735470 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/360637cf-82f2-4c3f-8007-8669c23e631c-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2" Dec 08 19:30:57 crc 
kubenswrapper[5125]: I1208 19:30:57.735498 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735520 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-config\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735542 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb139a6e-970e-4662-8bef-8155c86676c4-audit-dir\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735562 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljkk8\" (UniqueName: \"kubernetes.io/projected/fb139a6e-970e-4662-8bef-8155c86676c4-kube-api-access-ljkk8\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735573 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/965dbdfc-98cd-4eea-847b-36256d95a95e-available-featuregates\") pod \"openshift-config-operator-5777786469-5xzhq\" (UID: \"965dbdfc-98cd-4eea-847b-36256d95a95e\") " 
pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735595 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-config\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735640 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb139a6e-970e-4662-8bef-8155c86676c4-etcd-client\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735664 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735685 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fbd52e79-1f71-46e5-8170-270ba85e62df-etcd-serving-ca\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735731 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735757 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tkjf7\" (UniqueName: \"kubernetes.io/projected/a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1-kube-api-access-tkjf7\") pod \"dns-operator-799b87ffcd-fwfm2\" (UID: \"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735780 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a21f262-041c-4938-bf1c-9ba06822ff62-auth-proxy-config\") pod \"machine-approver-54c688565-h5dj4\" (UID: \"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735817 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/89670568-cddd-4d5c-9a13-c8e6bdc340aa-tmp\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.733010 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-trusted-ca-bundle\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.736232 5125 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-console-config\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.736244 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.736532 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/360637cf-82f2-4c3f-8007-8669c23e631c-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.736542 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89670568-cddd-4d5c-9a13-c8e6bdc340aa-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.736844 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb139a6e-970e-4662-8bef-8155c86676c4-audit-dir\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.736990 5125 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.737202 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.737543 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-config\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.737840 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-config\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.737901 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/360637cf-82f2-4c3f-8007-8669c23e631c-serving-cert\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2" Dec 08 
19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.735842 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-client-ca\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.737978 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-client-ca\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738002 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1872a46a-0e1f-469d-b403-8a1e0805d291-tmp\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738028 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mrww7\" (UniqueName: \"kubernetes.io/projected/965dbdfc-98cd-4eea-847b-36256d95a95e-kube-api-access-mrww7\") pod \"openshift-config-operator-5777786469-5xzhq\" (UID: \"965dbdfc-98cd-4eea-847b-36256d95a95e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738054 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p4692\" (UniqueName: \"kubernetes.io/projected/92837ccf-1e39-495e-bbcb-d3eaafd95d15-kube-api-access-p4692\") pod 
\"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738082 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfw8n\" (UniqueName: \"kubernetes.io/projected/840b1c0b-8303-40bb-a881-8a974ea23710-kube-api-access-lfw8n\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738089 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a21f262-041c-4938-bf1c-9ba06822ff62-auth-proxy-config\") pod \"machine-approver-54c688565-h5dj4\" (UID: \"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738138 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a21f262-041c-4938-bf1c-9ba06822ff62-config\") pod \"machine-approver-54c688565-h5dj4\" (UID: \"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738156 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/89670568-cddd-4d5c-9a13-c8e6bdc340aa-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738177 5125 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fbd52e79-1f71-46e5-8170-270ba85e62df-audit-dir\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738197 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/360637cf-82f2-4c3f-8007-8669c23e631c-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738212 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738228 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-oauth-serving-cert\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738369 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738767 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/89670568-cddd-4d5c-9a13-c8e6bdc340aa-tmp\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.738897 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/92837ccf-1e39-495e-bbcb-d3eaafd95d15-oauth-serving-cert\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.739089 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-client-ca\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.739139 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-images\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.739168 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1872a46a-0e1f-469d-b403-8a1e0805d291-tmp\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: 
\"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.739466 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a21f262-041c-4938-bf1c-9ba06822ff62-config\") pod \"machine-approver-54c688565-h5dj4\" (UID: \"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.739532 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/360637cf-82f2-4c3f-8007-8669c23e631c-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.739729 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/89670568-cddd-4d5c-9a13-c8e6bdc340aa-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.740556 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.740778 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1a21f262-041c-4938-bf1c-9ba06822ff62-machine-approver-tls\") pod \"machine-approver-54c688565-h5dj4\" (UID: \"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.740885 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.741164 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/89670568-cddd-4d5c-9a13-c8e6bdc340aa-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.741845 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.742857 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fb139a6e-970e-4662-8bef-8155c86676c4-etcd-client\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.742984 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-image-import-ca\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.743420 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.743429 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.750865 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/fb139a6e-970e-4662-8bef-8155c86676c4-audit\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.751879 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/92837ccf-1e39-495e-bbcb-d3eaafd95d15-console-serving-cert\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.752312 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1-metrics-tls\") pod \"dns-operator-799b87ffcd-fwfm2\" (UID: \"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.752506 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb139a6e-970e-4662-8bef-8155c86676c4-serving-cert\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.752634 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.752827 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fb139a6e-970e-4662-8bef-8155c86676c4-encryption-config\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.752850 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.752964 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/965dbdfc-98cd-4eea-847b-36256d95a95e-serving-cert\") pod \"openshift-config-operator-5777786469-5xzhq\" (UID: \"965dbdfc-98cd-4eea-847b-36256d95a95e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.755096 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1872a46a-0e1f-469d-b403-8a1e0805d291-serving-cert\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.760646 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.780229 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.800735 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.820430 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839330 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839370 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fbd52e79-1f71-46e5-8170-270ba85e62df-etcd-serving-ca\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839400 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-client-ca\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839416 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lfw8n\" (UniqueName: \"kubernetes.io/projected/840b1c0b-8303-40bb-a881-8a974ea23710-kube-api-access-lfw8n\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839438 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fbd52e79-1f71-46e5-8170-270ba85e62df-audit-dir\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839466 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/840b1c0b-8303-40bb-a881-8a974ea23710-tmp\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: 
\"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839483 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbd52e79-1f71-46e5-8170-270ba85e62df-etcd-client\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839503 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840b1c0b-8303-40bb-a881-8a974ea23710-serving-cert\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839530 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbd52e79-1f71-46e5-8170-270ba85e62df-serving-cert\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839553 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-config\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839585 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c8czw\" (UniqueName: \"kubernetes.io/projected/fbd52e79-1f71-46e5-8170-270ba85e62df-kube-api-access-c8czw\") pod 
\"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839653 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fbd52e79-1f71-46e5-8170-270ba85e62df-encryption-config\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839685 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fbd52e79-1f71-46e5-8170-270ba85e62df-audit-policies\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.839721 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbd52e79-1f71-46e5-8170-270ba85e62df-trusted-ca-bundle\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.840289 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbd52e79-1f71-46e5-8170-270ba85e62df-trusted-ca-bundle\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.841378 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: 
I1208 19:30:57.841398 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.842022 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fbd52e79-1f71-46e5-8170-270ba85e62df-etcd-serving-ca\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.842940 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fbd52e79-1f71-46e5-8170-270ba85e62df-audit-dir\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.843330 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/840b1c0b-8303-40bb-a881-8a974ea23710-tmp\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.843708 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-config\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 
19:30:57.843716 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fbd52e79-1f71-46e5-8170-270ba85e62df-audit-policies\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.844324 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-client-ca\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.846963 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fbd52e79-1f71-46e5-8170-270ba85e62df-etcd-client\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.847022 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840b1c0b-8303-40bb-a881-8a974ea23710-serving-cert\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.847034 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fbd52e79-1f71-46e5-8170-270ba85e62df-encryption-config\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.847252 
5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbd52e79-1f71-46e5-8170-270ba85e62df-serving-cert\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.860831 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.879939 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.895734 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.899908 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.902461 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.911762 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.912227 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.912996 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.916598 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-75h8s"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.919285 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.919745 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.921061 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.928026 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.928482 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.931589 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.931738 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.934519 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.934593 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.937193 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.937689 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.940137 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.940165 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-h2kxl"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.940385 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.943244 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.943287 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-fwfm2"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.943299 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.943310 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2wvch"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.943322 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-tm7d5"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.943325 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.943336 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-x7zl6"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.946082 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.946488 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.949288 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.949481 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.953929 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-h6j6c"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.954085 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.956739 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.956885 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-h6j6c" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.958913 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.959037 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.960044 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.960870 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jrtpt"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.960950 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.963481 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.963578 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.965778 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.965976 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.967948 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.968107 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.970439 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ls2zg"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.970529 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980261 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980351 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-cdw7h"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980383 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980415 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980426 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-h2kxl"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980434 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980441 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980449 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980509 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980520 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980530 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980541 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-t8fbs"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980494 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg" Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980571 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980748 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-v5nx6"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980795 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980804 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx"] Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980813 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jrtpt"] 
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980820 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980830 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980839 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980848 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hgxtj"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980856 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980863 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-5xzhq"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980871 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980878 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980887 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-h6j6c"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980895 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980903 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.980912 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-ghhd8"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.984063 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.984092 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bhnwz"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.984244 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ghhd8"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.987492 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-75h8s"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.987524 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ls2zg"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.987536 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.987548 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.987560 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ghhd8"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.987570 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bhnwz"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.987581 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2rxc6"]
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.987924 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bhnwz"
Dec 08 19:30:57 crc kubenswrapper[5125]: I1208 19:30:57.991892 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2rxc6"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.000173 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.020051 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.072969 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh"]
Dec 08 19:30:58 crc kubenswrapper[5125]: W1208 19:30:58.084910 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75614c9_b518_4c59_bd9f_259d9f410e76.slice/crio-36c423666f592d7677353cc14b0c472419cddace4ef0199937ad481f3cb88620 WatchSource:0}: Error finding container 36c423666f592d7677353cc14b0c472419cddace4ef0199937ad481f3cb88620: Status 404 returned error can't find the container with id 36c423666f592d7677353cc14b0c472419cddace4ef0199937ad481f3cb88620
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.095775 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dtzs\" (UniqueName: \"kubernetes.io/projected/1a21f262-041c-4938-bf1c-9ba06822ff62-kube-api-access-9dtzs\") pod \"machine-approver-54c688565-h5dj4\" (UID: \"1a21f262-041c-4938-bf1c-9ba06822ff62\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.114962 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x52dv\" (UniqueName: \"kubernetes.io/projected/89670568-cddd-4d5c-9a13-c8e6bdc340aa-kube-api-access-x52dv\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.133316 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbgtx\" (UniqueName: \"kubernetes.io/projected/360637cf-82f2-4c3f-8007-8669c23e631c-kube-api-access-nbgtx\") pod \"authentication-operator-7f5c659b84-7qxb2\" (UID: \"360637cf-82f2-4c3f-8007-8669c23e631c\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.153995 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4pmd\" (UniqueName: \"kubernetes.io/projected/69e82e98-c3d1-4cdd-9657-609e9e9b78d0-kube-api-access-w4pmd\") pod \"machine-api-operator-755bb95488-tm7d5\" (UID: \"69e82e98-c3d1-4cdd-9657-609e9e9b78d0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.159793 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.177753 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-22br8\" (UniqueName: \"kubernetes.io/projected/cdb7a298-ac30-410b-9ab7-a060a428e88b-kube-api-access-22br8\") pod \"oauth-openshift-66458b6674-2wvch\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") " pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.192140 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.197594 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89670568-cddd-4d5c-9a13-c8e6bdc340aa-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-7qjcm\" (UID: \"89670568-cddd-4d5c-9a13-c8e6bdc340aa\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.201672 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.216458 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljkk8\" (UniqueName: \"kubernetes.io/projected/fb139a6e-970e-4662-8bef-8155c86676c4-kube-api-access-ljkk8\") pod \"apiserver-9ddfb9f55-v5nx6\" (UID: \"fb139a6e-970e-4662-8bef-8155c86676c4\") " pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.217745 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.234680 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkjf7\" (UniqueName: \"kubernetes.io/projected/a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1-kube-api-access-tkjf7\") pod \"dns-operator-799b87ffcd-fwfm2\" (UID: \"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.254112 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.257071 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrww7\" (UniqueName: \"kubernetes.io/projected/965dbdfc-98cd-4eea-847b-36256d95a95e-kube-api-access-mrww7\") pod \"openshift-config-operator-5777786469-5xzhq\" (UID: \"965dbdfc-98cd-4eea-847b-36256d95a95e\") " pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.263132 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.265957 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" event={"ID":"1a21f262-041c-4938-bf1c-9ba06822ff62","Type":"ContainerStarted","Data":"71895ada1052c060b3797d23574341b8709d3cefddedd96d941e630eb86154ee"}
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.266615 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh" event={"ID":"f75614c9-b518-4c59-bd9f-259d9f410e76","Type":"ContainerStarted","Data":"36c423666f592d7677353cc14b0c472419cddace4ef0199937ad481f3cb88620"}
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.269969 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.276358 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4692\" (UniqueName: \"kubernetes.io/projected/92837ccf-1e39-495e-bbcb-d3eaafd95d15-kube-api-access-p4692\") pod \"console-64d44f6ddf-cdw7h\" (UID: \"92837ccf-1e39-495e-bbcb-d3eaafd95d15\") " pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.276535 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.295457 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4krl\" (UniqueName: \"kubernetes.io/projected/1872a46a-0e1f-469d-b403-8a1e0805d291-kube-api-access-t4krl\") pod \"route-controller-manager-776cdc94d6-lrh8v\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.317514 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8czw\" (UniqueName: \"kubernetes.io/projected/fbd52e79-1f71-46e5-8170-270ba85e62df-kube-api-access-c8czw\") pod \"apiserver-8596bd845d-pkvvc\" (UID: \"fbd52e79-1f71-46e5-8170-270ba85e62df\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.341510 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.345027 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfw8n\" (UniqueName: \"kubernetes.io/projected/840b1c0b-8303-40bb-a881-8a974ea23710-kube-api-access-lfw8n\") pod \"controller-manager-65b6cccf98-8pnd7\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.360374 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.380271 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.400931 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.420364 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.449170 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.462989 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.481107 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.491846 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.508063 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.523358 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.540426 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.541332 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.564534 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.573760 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm"]
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.580565 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.596213 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.600126 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.603071 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.620006 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.644039 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.659791 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Dec 08 19:30:58 crc kubenswrapper[5125]: W1208 19:30:58.670412 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89670568_cddd_4d5c_9a13_c8e6bdc340aa.slice/crio-fd99c7a3b4b1024e45c11a64db62741a5abdd8f01339fe7f9c64a4613dc63be3 WatchSource:0}: Error finding container fd99c7a3b4b1024e45c11a64db62741a5abdd8f01339fe7f9c64a4613dc63be3: Status 404 returned error can't find the container with id fd99c7a3b4b1024e45c11a64db62741a5abdd8f01339fe7f9c64a4613dc63be3
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.681869 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.699734 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.719789 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.760843 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.763413 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.766895 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.767536 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.780735 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.808966 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.821118 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-5xzhq"]
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.821403 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-fwfm2"]
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.825684 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.827097 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-v5nx6"]
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.837997 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2"]
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.838568 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-tm7d5"]
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.840190 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.859432 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2wvch"]
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.860523 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.881174 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.902316 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.906720 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-cdw7h"]
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.920197 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.920306 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"]
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.938475 5125 request.go:752] "Waited before sending request" delay="1.000680868s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0"
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.940314 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.942569 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc"]
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.946921 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"]
Dec 08 19:30:58 crc kubenswrapper[5125]: W1208 19:30:58.955634 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840b1c0b_8303_40bb_a881_8a974ea23710.slice/crio-dcff60cad2ac06a50c75438297eea55420905c4e3e547dbf70d5be6064a27f4a WatchSource:0}: Error finding container dcff60cad2ac06a50c75438297eea55420905c4e3e547dbf70d5be6064a27f4a: Status 404 returned error can't find the container with id dcff60cad2ac06a50c75438297eea55420905c4e3e547dbf70d5be6064a27f4a
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.960553 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Dec 08 19:30:58 crc kubenswrapper[5125]: W1208 19:30:58.961364 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbd52e79_1f71_46e5_8170_270ba85e62df.slice/crio-4d6aadfeb87997a818015b6d5ecb18e4f95a6a76b663ddc6b25d42ff5362156f WatchSource:0}: Error finding container 4d6aadfeb87997a818015b6d5ecb18e4f95a6a76b663ddc6b25d42ff5362156f: Status 404 returned error can't find the container with id 4d6aadfeb87997a818015b6d5ecb18e4f95a6a76b663ddc6b25d42ff5362156f
Dec 08 19:30:58 crc kubenswrapper[5125]: W1208 19:30:58.968737 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1872a46a_0e1f_469d_b403_8a1e0805d291.slice/crio-d41c7094337302c1a1d94ec77faa9764ac41bbbdfb78f24b8dd72ecee6faefb4 WatchSource:0}: Error finding container d41c7094337302c1a1d94ec77faa9764ac41bbbdfb78f24b8dd72ecee6faefb4: Status 404 returned error can't find the container with id d41c7094337302c1a1d94ec77faa9764ac41bbbdfb78f24b8dd72ecee6faefb4
Dec 08 19:30:58 crc kubenswrapper[5125]: I1208 19:30:58.980965 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.000704 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.020645 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.039752 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.065360 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.084645 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.106018 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.120570 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.140533 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.162065 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.179943 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.199831 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.227429 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.239945 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.262667 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.275760 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" event={"ID":"cdb7a298-ac30-410b-9ab7-a060a428e88b","Type":"ContainerStarted","Data":"ecbace5f6958b3269162e07d5ed74ede4f32ab7a84e9902a45c2dbfbae19f17d"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.280164 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh" event={"ID":"f75614c9-b518-4c59-bd9f-259d9f410e76","Type":"ContainerStarted","Data":"e0bfaec3f096b18ee544564f7a5a2a779aa35d9cab34e153def36b6341737ca2"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.280780 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.281811 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" event={"ID":"1872a46a-0e1f-469d-b403-8a1e0805d291","Type":"ContainerStarted","Data":"3f55efd52ee79979c5783b52c59de168693467ffeb12975c2ed4136ae6015879"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.281835 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" event={"ID":"1872a46a-0e1f-469d-b403-8a1e0805d291","Type":"ContainerStarted","Data":"d41c7094337302c1a1d94ec77faa9764ac41bbbdfb78f24b8dd72ecee6faefb4"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.283031 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.285121 5125 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-lrh8v container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.285174 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" podUID="1872a46a-0e1f-469d-b403-8a1e0805d291" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.297930 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2" event={"ID":"360637cf-82f2-4c3f-8007-8669c23e631c","Type":"ContainerStarted","Data":"635de72540f8020d7d26559222ddcf425db08688bcc05b69b726e3896df0faa4"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.297976 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2" event={"ID":"360637cf-82f2-4c3f-8007-8669c23e631c","Type":"ContainerStarted","Data":"c65b765b82d9ca3f727acb4e224a9d7c538d5877d42d4775e8a41bb9090afe1c"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.299863 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" event={"ID":"fb139a6e-970e-4662-8bef-8155c86676c4","Type":"ContainerStarted","Data":"02cb20005dfc56e3d52b76a3fef13377e6267a5a43e55e4c9d609dfd5617e5fd"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.299913 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" event={"ID":"fb139a6e-970e-4662-8bef-8155c86676c4","Type":"ContainerStarted","Data":"0508f684a39bb58749f291fde1605f2bcd4b8b8cce049173e66819ccce409684"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.300487 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.307085 5125 generic.go:358] "Generic (PLEG): container finished" podID="965dbdfc-98cd-4eea-847b-36256d95a95e" containerID="0d74fe24cdbfcae08fdb4db903072f3b0a287080c81bce0f78accde8cffd6b53" exitCode=0
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.307172 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq" event={"ID":"965dbdfc-98cd-4eea-847b-36256d95a95e","Type":"ContainerDied","Data":"0d74fe24cdbfcae08fdb4db903072f3b0a287080c81bce0f78accde8cffd6b53"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.307205 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq" event={"ID":"965dbdfc-98cd-4eea-847b-36256d95a95e","Type":"ContainerStarted","Data":"1262303b0e3f41e4518b5936ee16928e50dd06b8fa07e5381aff359bf5ad5022"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.311835 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" event={"ID":"1a21f262-041c-4938-bf1c-9ba06822ff62","Type":"ContainerStarted","Data":"076cfa3f05ff05942ef8b7e3d2cbf7309a312f7967ccfb6c2ef881325974790c"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.316043 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" event={"ID":"fbd52e79-1f71-46e5-8170-270ba85e62df","Type":"ContainerStarted","Data":"4d6aadfeb87997a818015b6d5ecb18e4f95a6a76b663ddc6b25d42ff5362156f"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.317620 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2" event={"ID":"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1","Type":"ContainerStarted","Data":"417913bf8865ee77d7deebef07b74c870ab350373df84ad8112ff2fcca011539"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.320817 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.324018 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5" event={"ID":"69e82e98-c3d1-4cdd-9657-609e9e9b78d0","Type":"ContainerStarted","Data":"b3e910906ba2f3d9a33c3f451a6250440593dcbac7d3532431d9029840fbfbf8"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.324057 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5" event={"ID":"69e82e98-c3d1-4cdd-9657-609e9e9b78d0","Type":"ContainerStarted","Data":"2eec0bf1151687648ac3d7ec5d94fca13894fcb4363a4c42334998f805ddf80e"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.331308 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-cdw7h" event={"ID":"92837ccf-1e39-495e-bbcb-d3eaafd95d15","Type":"ContainerStarted","Data":"a2384c343baf95a13f61fbb37bc2065cf967dc716565661d54eeed12b2ebf02e"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.331761 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-cdw7h" event={"ID":"92837ccf-1e39-495e-bbcb-d3eaafd95d15","Type":"ContainerStarted","Data":"2f89c5a51efbb6a1f5d04ccef28e642aa903cec15e3d84ac668573aadc9b48f7"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.334898 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" event={"ID":"89670568-cddd-4d5c-9a13-c8e6bdc340aa","Type":"ContainerStarted","Data":"d5c0232d84bc41b8302a372383326bb42593e9186d240453529bead7dbe864c3"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.334934 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" event={"ID":"89670568-cddd-4d5c-9a13-c8e6bdc340aa","Type":"ContainerStarted","Data":"fd99c7a3b4b1024e45c11a64db62741a5abdd8f01339fe7f9c64a4613dc63be3"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.342288 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" event={"ID":"840b1c0b-8303-40bb-a881-8a974ea23710","Type":"ContainerStarted","Data":"a91afdad36df325d6f4d1fd5450965f5cc07adf21d37118c50ac52b0143bd097"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.342337 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" event={"ID":"840b1c0b-8303-40bb-a881-8a974ea23710","Type":"ContainerStarted","Data":"dcff60cad2ac06a50c75438297eea55420905c4e3e547dbf70d5be6064a27f4a"}
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.342510 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.342597 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.345677 5125 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-8pnd7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: 
connection refused" start-of-body= Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.345744 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" podUID="840b1c0b-8303-40bb-a881-8a974ea23710" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.361239 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.380867 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.400279 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.422319 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.440891 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.461720 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.480351 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.500470 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.523383 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.540143 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.560977 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.580332 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.601128 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.620668 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.640655 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.661406 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.680911 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 19:30:59 
crc kubenswrapper[5125]: I1208 19:30:59.700570 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.720401 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.740383 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.760505 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.780227 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.800468 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.822045 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.842549 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.863221 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.880750 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 19:30:59 crc 
kubenswrapper[5125]: I1208 19:30:59.904217 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.921816 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.939248 5125 request.go:752] "Waited before sending request" delay="1.947132382s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.941227 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 19:30:59 crc kubenswrapper[5125]: I1208 19:30:59.960672 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.025879 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.041134 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.082685 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn7jw\" (UniqueName: \"kubernetes.io/projected/4d6421d4-f996-4c24-88de-d0cd3aee5aec-kube-api-access-tn7jw\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.082843 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-config\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: \"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.082927 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51fe67ff-4e90-4add-8447-58edc3e3d117-ca-trust-extracted\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.082987 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d5b91de-c016-4a44-aab6-910f036d51ae-service-ca-bundle\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083021 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: \"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083199 5125 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083263 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3d5b91de-c016-4a44-aab6-910f036d51ae-default-certificate\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083308 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/42d215e6-741b-4710-a7e9-b7944f744f0b-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sbhlq\" (UID: \"42d215e6-741b-4710-a7e9-b7944f744f0b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083329 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3d5b91de-c016-4a44-aab6-910f036d51ae-stats-auth\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083403 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-tls\") pod \"image-registry-66587d64c8-hgxtj\" 
(UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083459 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq4xz\" (UniqueName: \"kubernetes.io/projected/42d215e6-741b-4710-a7e9-b7944f744f0b-kube-api-access-wq4xz\") pod \"cluster-samples-operator-6b564684c8-sbhlq\" (UID: \"42d215e6-741b-4710-a7e9-b7944f744f0b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083504 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81694063-8439-4d15-8673-30e88676f33e-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083525 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfxkm\" (UniqueName: \"kubernetes.io/projected/81694063-8439-4d15-8673-30e88676f33e-kube-api-access-zfxkm\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083545 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: \"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083694 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-certificates\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083726 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-trusted-ca\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083748 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d6421d4-f996-4c24-88de-d0cd3aee5aec-config\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083825 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083898 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-bound-sa-token\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083922 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d6421d4-f996-4c24-88de-d0cd3aee5aec-serving-cert\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083966 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.083991 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-config\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084070 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d5b91de-c016-4a44-aab6-910f036d51ae-metrics-certs\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " 
pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084097 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbcng\" (UniqueName: \"kubernetes.io/projected/3d5b91de-c016-4a44-aab6-910f036d51ae-kube-api-access-wbcng\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084114 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h2vx\" (UniqueName: \"kubernetes.io/projected/c46131b3-44f8-4a83-a357-31ca0197d1be-kube-api-access-8h2vx\") pod \"downloads-747b44746d-t8fbs\" (UID: \"c46131b3-44f8-4a83-a357-31ca0197d1be\") " pod="openshift-console/downloads-747b44746d-t8fbs" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084158 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81694063-8439-4d15-8673-30e88676f33e-config\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084188 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fwnk\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-kube-api-access-8fwnk\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084203 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4d6421d4-f996-4c24-88de-d0cd3aee5aec-tmp-dir\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084218 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81694063-8439-4d15-8673-30e88676f33e-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084233 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: \"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084265 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4d6421d4-f996-4c24-88de-d0cd3aee5aec-etcd-ca\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084327 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d6421d4-f996-4c24-88de-d0cd3aee5aec-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084361 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4d6421d4-f996-4c24-88de-d0cd3aee5aec-etcd-client\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084382 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.084439 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51fe67ff-4e90-4add-8447-58edc3e3d117-installation-pull-secrets\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: E1208 19:31:00.085950 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:00.58593538 +0000 UTC m=+117.356425654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.185722 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.185906 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/601959f9-7e12-4ca5-9856-f962f3929720-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nblg9\" (UID: \"601959f9-7e12-4ca5-9856-f962f3929720\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.185931 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-config\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: \"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.185979 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-plugins-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186004 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsb4h\" (UniqueName: \"kubernetes.io/projected/ce71fc42-1327-4ce2-8753-feda68799f6c-kube-api-access-nsb4h\") pod \"kube-storage-version-migrator-operator-565b79b866-bxl82\" (UID: \"ce71fc42-1327-4ce2-8753-feda68799f6c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186020 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: \"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186036 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c52381c-f0cb-4cf1-992d-60d930ba7d00-trusted-ca\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186051 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm78l\" (UniqueName: \"kubernetes.io/projected/1ba9072a-064d-4d53-b64b-7315a955f22f-kube-api-access-mm78l\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186065 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-webhook-cert\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: \"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186080 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7vmc\" (UniqueName: \"kubernetes.io/projected/839fbd69-7068-47fa-94aa-8af954d8cbc9-kube-api-access-x7vmc\") pod \"package-server-manager-77f986bd66-4jn6q\" (UID: \"839fbd69-7068-47fa-94aa-8af954d8cbc9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186096 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02227633-e316-43f5-a4f7-9b77a76f30d9-serving-cert\") pod \"service-ca-operator-5b9c976747-z44ln\" (UID: \"02227633-e316-43f5-a4f7-9b77a76f30d9\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186125 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186160 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3d5b91de-c016-4a44-aab6-910f036d51ae-default-certificate\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186174 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhdvt\" (UniqueName: \"kubernetes.io/projected/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-kube-api-access-fhdvt\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186188 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e6991c6c-a51b-4cb6-a726-99f42a49e693-signing-cabundle\") pod \"service-ca-74545575db-h6j6c\" (UID: \"e6991c6c-a51b-4cb6-a726-99f42a49e693\") " pod="openshift-service-ca/service-ca-74545575db-h6j6c"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186205 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6h6l\" (UniqueName: \"kubernetes.io/projected/c839200b-2680-46bc-bcfe-30b5dd4e5d03-kube-api-access-s6h6l\") pod \"machine-config-server-2rxc6\" (UID: \"c839200b-2680-46bc-bcfe-30b5dd4e5d03\") " pod="openshift-machine-config-operator/machine-config-server-2rxc6"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186220 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2eb0798c-5c61-48ac-bf09-2f21642e8e53-metrics-tls\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186234 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1ba9072a-064d-4d53-b64b-7315a955f22f-srv-cert\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186247 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e433cbb2-0dab-4949-93f4-beb4675b4117-tmp-dir\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186271 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9272dafd-6843-41b9-bff8-998f3fd23d33-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-gsksl\" (UID: \"9272dafd-6843-41b9-bff8-998f3fd23d33\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186294 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wq4xz\" (UniqueName: \"kubernetes.io/projected/42d215e6-741b-4710-a7e9-b7944f744f0b-kube-api-access-wq4xz\") pod \"cluster-samples-operator-6b564684c8-sbhlq\" (UID: \"42d215e6-741b-4710-a7e9-b7944f744f0b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186307 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfb8w\" (UniqueName: \"kubernetes.io/projected/e6991c6c-a51b-4cb6-a726-99f42a49e693-kube-api-access-zfb8w\") pod \"service-ca-74545575db-h6j6c\" (UID: \"e6991c6c-a51b-4cb6-a726-99f42a49e693\") " pod="openshift-service-ca/service-ca-74545575db-h6j6c"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186322 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whx82\" (UniqueName: \"kubernetes.io/projected/d09591c3-30e3-44ab-88d3-91833456f731-kube-api-access-whx82\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186339 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: \"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186358 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-config\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186382 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-certificates\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186398 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-trusted-ca\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186413 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81694063-8439-4d15-8673-30e88676f33e-config\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186428 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-tmpfs\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: \"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186443 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7hc5\" (UniqueName: \"kubernetes.io/projected/02227633-e316-43f5-a4f7-9b77a76f30d9-kube-api-access-z7hc5\") pod \"service-ca-operator-5b9c976747-z44ln\" (UID: \"02227633-e316-43f5-a4f7-9b77a76f30d9\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186476 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186490 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d5b91de-c016-4a44-aab6-910f036d51ae-metrics-certs\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186516 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d6421d4-f996-4c24-88de-d0cd3aee5aec-serving-cert\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186530 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-socket-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186546 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d3d93c9-073e-4463-ad22-0dc846df2d84-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186568 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c52381c-f0cb-4cf1-992d-60d930ba7d00-serving-cert\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " pod="openshift-console-operator/console-operator-67c89758df-h2kxl"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186582 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2eb0798c-5c61-48ac-bf09-2f21642e8e53-config-volume\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186597 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7d3d93c9-073e-4463-ad22-0dc846df2d84-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186641 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjnw\" (UniqueName: \"kubernetes.io/projected/601959f9-7e12-4ca5-9856-f962f3929720-kube-api-access-qkjnw\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nblg9\" (UID: \"601959f9-7e12-4ca5-9856-f962f3929720\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186657 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8fwnk\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-kube-api-access-8fwnk\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186672 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75zwt\" (UniqueName: \"kubernetes.io/projected/1c52381c-f0cb-4cf1-992d-60d930ba7d00-kube-api-access-75zwt\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " pod="openshift-console-operator/console-operator-67c89758df-h2kxl"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186689 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4d6421d4-f996-4c24-88de-d0cd3aee5aec-etcd-ca\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186707 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d09591c3-30e3-44ab-88d3-91833456f731-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186723 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186742 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186757 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-registration-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186776 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d6421d4-f996-4c24-88de-d0cd3aee5aec-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186791 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4d6421d4-f996-4c24-88de-d0cd3aee5aec-etcd-client\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186807 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-tmpfs\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186824 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51fe67ff-4e90-4add-8447-58edc3e3d117-installation-pull-secrets\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186841 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxbf6\" (UniqueName: \"kubernetes.io/projected/8e26b397-18c5-4b0e-a483-943460e35c11-kube-api-access-lxbf6\") pod \"multus-admission-controller-69db94689b-ls2zg\" (UID: \"8e26b397-18c5-4b0e-a483-943460e35c11\") " pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186856 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz9vq\" (UniqueName: \"kubernetes.io/projected/9272dafd-6843-41b9-bff8-998f3fd23d33-kube-api-access-hz9vq\") pod \"machine-config-controller-f9cdd68f7-gsksl\" (UID: \"9272dafd-6843-41b9-bff8-998f3fd23d33\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186871 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d09591c3-30e3-44ab-88d3-91833456f731-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186905 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c839200b-2680-46bc-bcfe-30b5dd4e5d03-certs\") pod \"machine-config-server-2rxc6\" (UID: \"c839200b-2680-46bc-bcfe-30b5dd4e5d03\") " pod="openshift-machine-config-operator/machine-config-server-2rxc6"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186920 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9dgl\" (UniqueName: \"kubernetes.io/projected/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-kube-api-access-g9dgl\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: \"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186934 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7d3d93c9-073e-4463-ad22-0dc846df2d84-ready\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186948 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e433cbb2-0dab-4949-93f4-beb4675b4117-config\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186964 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2eb0798c-5c61-48ac-bf09-2f21642e8e53-tmp-dir\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186982 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1ba9072a-064d-4d53-b64b-7315a955f22f-tmpfs\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.186997 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/437bd009-7a16-4598-9da1-57f4ca950147-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187027 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51fe67ff-4e90-4add-8447-58edc3e3d117-ca-trust-extracted\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187042 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187060 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d5b91de-c016-4a44-aab6-910f036d51ae-service-ca-bundle\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187076 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9272dafd-6843-41b9-bff8-998f3fd23d33-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-gsksl\" (UID: \"9272dafd-6843-41b9-bff8-998f3fd23d33\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187105 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e433cbb2-0dab-4949-93f4-beb4675b4117-kube-api-access\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187124 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5vm6\" (UniqueName: \"kubernetes.io/projected/437bd009-7a16-4598-9da1-57f4ca950147-kube-api-access-h5vm6\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187145 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-mountpoint-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187159 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-csi-data-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187173 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0536d1a-6ff6-4063-a0e0-562241238b5b-cert\") pod \"ingress-canary-ghhd8\" (UID: \"c0536d1a-6ff6-4063-a0e0-562241238b5b\") " pod="openshift-ingress-canary/ingress-canary-ghhd8"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187187 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce71fc42-1327-4ce2-8753-feda68799f6c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-bxl82\" (UID: \"ce71fc42-1327-4ce2-8753-feda68799f6c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187212 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c839200b-2680-46bc-bcfe-30b5dd4e5d03-node-bootstrap-token\") pod \"machine-config-server-2rxc6\" (UID: \"c839200b-2680-46bc-bcfe-30b5dd4e5d03\") " pod="openshift-machine-config-operator/machine-config-server-2rxc6"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187227 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e433cbb2-0dab-4949-93f4-beb4675b4117-serving-cert\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187244 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/42d215e6-741b-4710-a7e9-b7944f744f0b-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sbhlq\" (UID: \"42d215e6-741b-4710-a7e9-b7944f744f0b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187260 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3d5b91de-c016-4a44-aab6-910f036d51ae-stats-auth\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187274 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d09591c3-30e3-44ab-88d3-91833456f731-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187295 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-tls\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187309 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-config-volume\") pod \"collect-profiles-29420370-b586h\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187327 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdm9x\" (UniqueName: \"kubernetes.io/projected/c0536d1a-6ff6-4063-a0e0-562241238b5b-kube-api-access-gdm9x\") pod \"ingress-canary-ghhd8\" (UID: \"c0536d1a-6ff6-4063-a0e0-562241238b5b\") " pod="openshift-ingress-canary/ingress-canary-ghhd8"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187357 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81694063-8439-4d15-8673-30e88676f33e-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187374 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zfxkm\" (UniqueName: \"kubernetes.io/projected/81694063-8439-4d15-8673-30e88676f33e-kube-api-access-zfxkm\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187405 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1ba9072a-064d-4d53-b64b-7315a955f22f-profile-collector-cert\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187421 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-srv-cert\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187441 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d6421d4-f996-4c24-88de-d0cd3aee5aec-config\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187457 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wbcng\" (UniqueName: \"kubernetes.io/projected/3d5b91de-c016-4a44-aab6-910f036d51ae-kube-api-access-wbcng\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187474 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8h2vx\" (UniqueName: \"kubernetes.io/projected/c46131b3-44f8-4a83-a357-31ca0197d1be-kube-api-access-8h2vx\") pod \"downloads-747b44746d-t8fbs\" (UID: \"c46131b3-44f8-4a83-a357-31ca0197d1be\") " pod="openshift-console/downloads-747b44746d-t8fbs"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187490 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c52381c-f0cb-4cf1-992d-60d930ba7d00-config\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " pod="openshift-console-operator/console-operator-67c89758df-h2kxl"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187508 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77083b49-6a76-42e1-9f35-4b34306c23d3-tmp\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187531 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: \"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187551 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-bound-sa-token\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187566 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e6991c6c-a51b-4cb6-a726-99f42a49e693-signing-key\") pod \"service-ca-74545575db-h6j6c\" (UID: \"e6991c6c-a51b-4cb6-a726-99f42a49e693\") " pod="openshift-service-ca/service-ca-74545575db-h6j6c"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187583 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkls9\" (UniqueName: \"kubernetes.io/projected/7d3d93c9-073e-4463-ad22-0dc846df2d84-kube-api-access-mkls9\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187599 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bw68\" (UniqueName: \"kubernetes.io/projected/77083b49-6a76-42e1-9f35-4b34306c23d3-kube-api-access-6bw68\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187634 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81694063-8439-4d15-8673-30e88676f33e-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187652 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgkdt\" (UniqueName: \"kubernetes.io/projected/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-kube-api-access-lgkdt\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187674 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5lb9\" (UniqueName: \"kubernetes.io/projected/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-kube-api-access-q5lb9\") pod \"collect-profiles-29420370-b586h\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187697 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4d6421d4-f996-4c24-88de-d0cd3aee5aec-tmp-dir\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187713 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztz6q\" (UniqueName: \"kubernetes.io/projected/dbdef7b8-f28d-4bb3-aff4-33b97ff1e415-kube-api-access-ztz6q\") pod \"migrator-866fcbc849-44sgb\" (UID: \"dbdef7b8-f28d-4bb3-aff4-33b97ff1e415\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187728 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjgmj\" (UniqueName: \"kubernetes.io/projected/2eb0798c-5c61-48ac-bf09-2f21642e8e53-kube-api-access-sjgmj\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187743 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-apiservice-cert\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: \"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf"
Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187762 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e26b397-18c5-4b0e-a483-943460e35c11-webhook-certs\") pod \"multus-admission-controller-69db94689b-ls2zg\" (UID: \"8e26b397-18c5-4b0e-a483-943460e35c11\") "
pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187783 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-secret-volume\") pod \"collect-profiles-29420370-b586h\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187797 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/437bd009-7a16-4598-9da1-57f4ca950147-images\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187827 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/437bd009-7a16-4598-9da1-57f4ca950147-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187843 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce71fc42-1327-4ce2-8753-feda68799f6c-config\") pod \"kube-storage-version-migrator-operator-565b79b866-bxl82\" (UID: \"ce71fc42-1327-4ce2-8753-feda68799f6c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187860 5125 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187876 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/839fbd69-7068-47fa-94aa-8af954d8cbc9-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-4jn6q\" (UID: \"839fbd69-7068-47fa-94aa-8af954d8cbc9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187893 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02227633-e316-43f5-a4f7-9b77a76f30d9-config\") pod \"service-ca-operator-5b9c976747-z44ln\" (UID: \"02227633-e316-43f5-a4f7-9b77a76f30d9\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.187913 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tn7jw\" (UniqueName: \"kubernetes.io/projected/4d6421d4-f996-4c24-88de-d0cd3aee5aec-kube-api-access-tn7jw\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: E1208 19:31:00.188103 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 19:31:00.68808964 +0000 UTC m=+117.458579914 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.189221 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4d6421d4-f996-4c24-88de-d0cd3aee5aec-tmp-dir\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.189452 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51fe67ff-4e90-4add-8447-58edc3e3d117-ca-trust-extracted\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.189653 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81694063-8439-4d15-8673-30e88676f33e-config\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.189922 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-trusted-ca\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.190042 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d5b91de-c016-4a44-aab6-910f036d51ae-service-ca-bundle\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.191089 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/4d6421d4-f996-4c24-88de-d0cd3aee5aec-etcd-ca\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.191160 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.191902 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: \"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.192252 5125 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-config\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.192687 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d6421d4-f996-4c24-88de-d0cd3aee5aec-etcd-service-ca\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.193105 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-certificates\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.195596 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4d6421d4-f996-4c24-88de-d0cd3aee5aec-serving-cert\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.195940 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/81694063-8439-4d15-8673-30e88676f33e-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" Dec 08 19:31:00 crc 
kubenswrapper[5125]: I1208 19:31:00.196182 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3d5b91de-c016-4a44-aab6-910f036d51ae-metrics-certs\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.196769 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d6421d4-f996-4c24-88de-d0cd3aee5aec-config\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.197893 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-tls\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.198859 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3d5b91de-c016-4a44-aab6-910f036d51ae-stats-auth\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.199572 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: \"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" Dec 08 19:31:00 crc 
kubenswrapper[5125]: I1208 19:31:00.199915 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81694063-8439-4d15-8673-30e88676f33e-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.200023 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/42d215e6-741b-4710-a7e9-b7944f744f0b-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sbhlq\" (UID: \"42d215e6-741b-4710-a7e9-b7944f744f0b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.201078 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.208137 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4d6421d4-f996-4c24-88de-d0cd3aee5aec-etcd-client\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.212540 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3d5b91de-c016-4a44-aab6-910f036d51ae-default-certificate\") pod 
\"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.221075 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51fe67ff-4e90-4add-8447-58edc3e3d117-installation-pull-secrets\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.237483 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn7jw\" (UniqueName: \"kubernetes.io/projected/4d6421d4-f996-4c24-88de-d0cd3aee5aec-kube-api-access-tn7jw\") pod \"etcd-operator-69b85846b6-bfrm9\" (UID: \"4d6421d4-f996-4c24-88de-d0cd3aee5aec\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.254372 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfxkm\" (UniqueName: \"kubernetes.io/projected/81694063-8439-4d15-8673-30e88676f33e-kube-api-access-zfxkm\") pod \"openshift-controller-manager-operator-686468bdd5-t2wgb\" (UID: \"81694063-8439-4d15-8673-30e88676f33e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.272931 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd5a81f9-3ca2-4c34-9160-5db0dd237f3c-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-8xhhx\" (UID: \"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289720 5125 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z7hc5\" (UniqueName: \"kubernetes.io/projected/02227633-e316-43f5-a4f7-9b77a76f30d9-kube-api-access-z7hc5\") pod \"service-ca-operator-5b9c976747-z44ln\" (UID: \"02227633-e316-43f5-a4f7-9b77a76f30d9\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289768 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-socket-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289784 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d3d93c9-073e-4463-ad22-0dc846df2d84-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289805 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c52381c-f0cb-4cf1-992d-60d930ba7d00-serving-cert\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289820 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2eb0798c-5c61-48ac-bf09-2f21642e8e53-config-volume\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289833 
5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7d3d93c9-073e-4463-ad22-0dc846df2d84-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289853 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qkjnw\" (UniqueName: \"kubernetes.io/projected/601959f9-7e12-4ca5-9856-f962f3929720-kube-api-access-qkjnw\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nblg9\" (UID: \"601959f9-7e12-4ca5-9856-f962f3929720\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289868 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-75zwt\" (UniqueName: \"kubernetes.io/projected/1c52381c-f0cb-4cf1-992d-60d930ba7d00-kube-api-access-75zwt\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289883 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d09591c3-30e3-44ab-88d3-91833456f731-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289898 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: 
\"77083b49-6a76-42e1-9f35-4b34306c23d3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289915 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-registration-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289958 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-tmpfs\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.289995 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lxbf6\" (UniqueName: \"kubernetes.io/projected/8e26b397-18c5-4b0e-a483-943460e35c11-kube-api-access-lxbf6\") pod \"multus-admission-controller-69db94689b-ls2zg\" (UID: \"8e26b397-18c5-4b0e-a483-943460e35c11\") " pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290010 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hz9vq\" (UniqueName: \"kubernetes.io/projected/9272dafd-6843-41b9-bff8-998f3fd23d33-kube-api-access-hz9vq\") pod \"machine-config-controller-f9cdd68f7-gsksl\" (UID: \"9272dafd-6843-41b9-bff8-998f3fd23d33\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290028 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/d09591c3-30e3-44ab-88d3-91833456f731-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290052 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c839200b-2680-46bc-bcfe-30b5dd4e5d03-certs\") pod \"machine-config-server-2rxc6\" (UID: \"c839200b-2680-46bc-bcfe-30b5dd4e5d03\") " pod="openshift-machine-config-operator/machine-config-server-2rxc6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290071 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g9dgl\" (UniqueName: \"kubernetes.io/projected/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-kube-api-access-g9dgl\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: \"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290111 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7d3d93c9-073e-4463-ad22-0dc846df2d84-ready\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290128 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e433cbb2-0dab-4949-93f4-beb4675b4117-config\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290181 5125 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2eb0798c-5c61-48ac-bf09-2f21642e8e53-tmp-dir\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290202 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1ba9072a-064d-4d53-b64b-7315a955f22f-tmpfs\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290218 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/437bd009-7a16-4598-9da1-57f4ca950147-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290239 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290256 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9272dafd-6843-41b9-bff8-998f3fd23d33-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-gsksl\" (UID: \"9272dafd-6843-41b9-bff8-998f3fd23d33\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" Dec 08 19:31:00 crc kubenswrapper[5125]: 
I1208 19:31:00.290273 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e433cbb2-0dab-4949-93f4-beb4675b4117-kube-api-access\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290289 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h5vm6\" (UniqueName: \"kubernetes.io/projected/437bd009-7a16-4598-9da1-57f4ca950147-kube-api-access-h5vm6\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290307 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-mountpoint-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290322 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-csi-data-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290335 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0536d1a-6ff6-4063-a0e0-562241238b5b-cert\") pod \"ingress-canary-ghhd8\" (UID: \"c0536d1a-6ff6-4063-a0e0-562241238b5b\") " pod="openshift-ingress-canary/ingress-canary-ghhd8" Dec 
08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290349 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce71fc42-1327-4ce2-8753-feda68799f6c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-bxl82\" (UID: \"ce71fc42-1327-4ce2-8753-feda68799f6c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290368 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c839200b-2680-46bc-bcfe-30b5dd4e5d03-node-bootstrap-token\") pod \"machine-config-server-2rxc6\" (UID: \"c839200b-2680-46bc-bcfe-30b5dd4e5d03\") " pod="openshift-machine-config-operator/machine-config-server-2rxc6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290383 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e433cbb2-0dab-4949-93f4-beb4675b4117-serving-cert\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290400 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d09591c3-30e3-44ab-88d3-91833456f731-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290416 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-config-volume\") pod 
\"collect-profiles-29420370-b586h\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290432 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gdm9x\" (UniqueName: \"kubernetes.io/projected/c0536d1a-6ff6-4063-a0e0-562241238b5b-kube-api-access-gdm9x\") pod \"ingress-canary-ghhd8\" (UID: \"c0536d1a-6ff6-4063-a0e0-562241238b5b\") " pod="openshift-ingress-canary/ingress-canary-ghhd8" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290454 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1ba9072a-064d-4d53-b64b-7315a955f22f-profile-collector-cert\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290468 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-srv-cert\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290486 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c52381c-f0cb-4cf1-992d-60d930ba7d00-config\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290503 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/77083b49-6a76-42e1-9f35-4b34306c23d3-tmp\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290523 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e6991c6c-a51b-4cb6-a726-99f42a49e693-signing-key\") pod \"service-ca-74545575db-h6j6c\" (UID: \"e6991c6c-a51b-4cb6-a726-99f42a49e693\") " pod="openshift-service-ca/service-ca-74545575db-h6j6c" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290540 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mkls9\" (UniqueName: \"kubernetes.io/projected/7d3d93c9-073e-4463-ad22-0dc846df2d84-kube-api-access-mkls9\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290555 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6bw68\" (UniqueName: \"kubernetes.io/projected/77083b49-6a76-42e1-9f35-4b34306c23d3-kube-api-access-6bw68\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290576 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290592 5125 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lgkdt\" (UniqueName: \"kubernetes.io/projected/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-kube-api-access-lgkdt\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290640 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5lb9\" (UniqueName: \"kubernetes.io/projected/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-kube-api-access-q5lb9\") pod \"collect-profiles-29420370-b586h\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290664 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ztz6q\" (UniqueName: \"kubernetes.io/projected/dbdef7b8-f28d-4bb3-aff4-33b97ff1e415-kube-api-access-ztz6q\") pod \"migrator-866fcbc849-44sgb\" (UID: \"dbdef7b8-f28d-4bb3-aff4-33b97ff1e415\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290680 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sjgmj\" (UniqueName: \"kubernetes.io/projected/2eb0798c-5c61-48ac-bf09-2f21642e8e53-kube-api-access-sjgmj\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290694 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-apiservice-cert\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: \"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290711 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e26b397-18c5-4b0e-a483-943460e35c11-webhook-certs\") pod \"multus-admission-controller-69db94689b-ls2zg\" (UID: \"8e26b397-18c5-4b0e-a483-943460e35c11\") " pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290728 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-secret-volume\") pod \"collect-profiles-29420370-b586h\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290743 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/437bd009-7a16-4598-9da1-57f4ca950147-images\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290762 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/437bd009-7a16-4598-9da1-57f4ca950147-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290777 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce71fc42-1327-4ce2-8753-feda68799f6c-config\") 
pod \"kube-storage-version-migrator-operator-565b79b866-bxl82\" (UID: \"ce71fc42-1327-4ce2-8753-feda68799f6c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290792 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290809 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/839fbd69-7068-47fa-94aa-8af954d8cbc9-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-4jn6q\" (UID: \"839fbd69-7068-47fa-94aa-8af954d8cbc9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290825 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02227633-e316-43f5-a4f7-9b77a76f30d9-config\") pod \"service-ca-operator-5b9c976747-z44ln\" (UID: \"02227633-e316-43f5-a4f7-9b77a76f30d9\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290851 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/601959f9-7e12-4ca5-9856-f962f3929720-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nblg9\" (UID: \"601959f9-7e12-4ca5-9856-f962f3929720\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290871 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-plugins-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290902 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nsb4h\" (UniqueName: \"kubernetes.io/projected/ce71fc42-1327-4ce2-8753-feda68799f6c-kube-api-access-nsb4h\") pod \"kube-storage-version-migrator-operator-565b79b866-bxl82\" (UID: \"ce71fc42-1327-4ce2-8753-feda68799f6c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290923 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c52381c-f0cb-4cf1-992d-60d930ba7d00-trusted-ca\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290937 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mm78l\" (UniqueName: \"kubernetes.io/projected/1ba9072a-064d-4d53-b64b-7315a955f22f-kube-api-access-mm78l\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290956 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-webhook-cert\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: \"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290971 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x7vmc\" (UniqueName: \"kubernetes.io/projected/839fbd69-7068-47fa-94aa-8af954d8cbc9-kube-api-access-x7vmc\") pod \"package-server-manager-77f986bd66-4jn6q\" (UID: \"839fbd69-7068-47fa-94aa-8af954d8cbc9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.290986 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02227633-e316-43f5-a4f7-9b77a76f30d9-serving-cert\") pod \"service-ca-operator-5b9c976747-z44ln\" (UID: \"02227633-e316-43f5-a4f7-9b77a76f30d9\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.291020 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fhdvt\" (UniqueName: \"kubernetes.io/projected/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-kube-api-access-fhdvt\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.291035 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e6991c6c-a51b-4cb6-a726-99f42a49e693-signing-cabundle\") pod \"service-ca-74545575db-h6j6c\" (UID: \"e6991c6c-a51b-4cb6-a726-99f42a49e693\") " pod="openshift-service-ca/service-ca-74545575db-h6j6c" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.291052 5125 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s6h6l\" (UniqueName: \"kubernetes.io/projected/c839200b-2680-46bc-bcfe-30b5dd4e5d03-kube-api-access-s6h6l\") pod \"machine-config-server-2rxc6\" (UID: \"c839200b-2680-46bc-bcfe-30b5dd4e5d03\") " pod="openshift-machine-config-operator/machine-config-server-2rxc6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.291066 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2eb0798c-5c61-48ac-bf09-2f21642e8e53-metrics-tls\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.291082 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1ba9072a-064d-4d53-b64b-7315a955f22f-srv-cert\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.291097 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e433cbb2-0dab-4949-93f4-beb4675b4117-tmp-dir\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.291142 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9272dafd-6843-41b9-bff8-998f3fd23d33-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-gsksl\" (UID: \"9272dafd-6843-41b9-bff8-998f3fd23d33\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" Dec 08 19:31:00 crc 
kubenswrapper[5125]: I1208 19:31:00.291168 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zfb8w\" (UniqueName: \"kubernetes.io/projected/e6991c6c-a51b-4cb6-a726-99f42a49e693-kube-api-access-zfb8w\") pod \"service-ca-74545575db-h6j6c\" (UID: \"e6991c6c-a51b-4cb6-a726-99f42a49e693\") " pod="openshift-service-ca/service-ca-74545575db-h6j6c" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.291214 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-whx82\" (UniqueName: \"kubernetes.io/projected/d09591c3-30e3-44ab-88d3-91833456f731-kube-api-access-whx82\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.291266 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-tmpfs\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: \"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.291727 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-tmpfs\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: \"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.292290 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-registration-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " 
pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.292408 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d3d93c9-073e-4463-ad22-0dc846df2d84-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.292733 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2eb0798c-5c61-48ac-bf09-2f21642e8e53-tmp-dir\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.293038 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02227633-e316-43f5-a4f7-9b77a76f30d9-config\") pod \"service-ca-operator-5b9c976747-z44ln\" (UID: \"02227633-e316-43f5-a4f7-9b77a76f30d9\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.293272 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c52381c-f0cb-4cf1-992d-60d930ba7d00-config\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.293412 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77083b49-6a76-42e1-9f35-4b34306c23d3-tmp\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:31:00 crc 
kubenswrapper[5125]: I1208 19:31:00.293447 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-tmpfs\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.294002 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/437bd009-7a16-4598-9da1-57f4ca950147-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.294107 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-config-volume\") pod \"collect-profiles-29420370-b586h\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.294205 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9272dafd-6843-41b9-bff8-998f3fd23d33-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-gsksl\" (UID: \"9272dafd-6843-41b9-bff8-998f3fd23d33\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.294391 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-mountpoint-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: 
\"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.294477 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-csi-data-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.294860 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7d3d93c9-073e-4463-ad22-0dc846df2d84-ready\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.295046 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e433cbb2-0dab-4949-93f4-beb4675b4117-config\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.295660 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-socket-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.296300 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c52381c-f0cb-4cf1-992d-60d930ba7d00-serving-cert\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " 
pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.296398 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2eb0798c-5c61-48ac-bf09-2f21642e8e53-config-volume\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.296873 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7d3d93c9-073e-4463-ad22-0dc846df2d84-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" Dec 08 19:31:00 crc kubenswrapper[5125]: E1208 19:31:00.297113 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:00.797097093 +0000 UTC m=+117.567587367 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.297300 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-srv-cert\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.297433 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d09591c3-30e3-44ab-88d3-91833456f731-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.298669 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-plugins-dir\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.299121 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.299895 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1c52381c-f0cb-4cf1-992d-60d930ba7d00-trusted-ca\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.300005 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d09591c3-30e3-44ab-88d3-91833456f731-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.301019 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c839200b-2680-46bc-bcfe-30b5dd4e5d03-certs\") pod \"machine-config-server-2rxc6\" (UID: \"c839200b-2680-46bc-bcfe-30b5dd4e5d03\") " pod="openshift-machine-config-operator/machine-config-server-2rxc6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.303100 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.303543 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-apiservice-cert\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: 
\"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.303810 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1ba9072a-064d-4d53-b64b-7315a955f22f-profile-collector-cert\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.304206 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce71fc42-1327-4ce2-8753-feda68799f6c-config\") pod \"kube-storage-version-migrator-operator-565b79b866-bxl82\" (UID: \"ce71fc42-1327-4ce2-8753-feda68799f6c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.305886 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/437bd009-7a16-4598-9da1-57f4ca950147-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.306584 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e6991c6c-a51b-4cb6-a726-99f42a49e693-signing-key\") pod \"service-ca-74545575db-h6j6c\" (UID: \"e6991c6c-a51b-4cb6-a726-99f42a49e693\") " pod="openshift-service-ca/service-ca-74545575db-h6j6c" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.307273 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fwnk\" (UniqueName: 
\"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-kube-api-access-8fwnk\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.307384 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/437bd009-7a16-4598-9da1-57f4ca950147-images\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.307715 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/e433cbb2-0dab-4949-93f4-beb4675b4117-tmp-dir\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.307741 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e6991c6c-a51b-4cb6-a726-99f42a49e693-signing-cabundle\") pod \"service-ca-74545575db-h6j6c\" (UID: \"e6991c6c-a51b-4cb6-a726-99f42a49e693\") " pod="openshift-service-ca/service-ca-74545575db-h6j6c" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.307827 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e433cbb2-0dab-4949-93f4-beb4675b4117-serving-cert\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.308719 5125 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e26b397-18c5-4b0e-a483-943460e35c11-webhook-certs\") pod \"multus-admission-controller-69db94689b-ls2zg\" (UID: \"8e26b397-18c5-4b0e-a483-943460e35c11\") " pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.309304 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.309682 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1ba9072a-064d-4d53-b64b-7315a955f22f-srv-cert\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.309897 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-secret-volume\") pod \"collect-profiles-29420370-b586h\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.310468 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/601959f9-7e12-4ca5-9856-f962f3929720-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nblg9\" (UID: \"601959f9-7e12-4ca5-9856-f962f3929720\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.311070 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0536d1a-6ff6-4063-a0e0-562241238b5b-cert\") pod \"ingress-canary-ghhd8\" (UID: \"c0536d1a-6ff6-4063-a0e0-562241238b5b\") " pod="openshift-ingress-canary/ingress-canary-ghhd8" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.311355 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-webhook-cert\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: \"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.312034 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2eb0798c-5c61-48ac-bf09-2f21642e8e53-metrics-tls\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.312194 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c839200b-2680-46bc-bcfe-30b5dd4e5d03-node-bootstrap-token\") pod \"machine-config-server-2rxc6\" (UID: \"c839200b-2680-46bc-bcfe-30b5dd4e5d03\") " pod="openshift-machine-config-operator/machine-config-server-2rxc6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.312773 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/839fbd69-7068-47fa-94aa-8af954d8cbc9-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-4jn6q\" (UID: \"839fbd69-7068-47fa-94aa-8af954d8cbc9\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.312974 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02227633-e316-43f5-a4f7-9b77a76f30d9-serving-cert\") pod \"service-ca-operator-5b9c976747-z44ln\" (UID: \"02227633-e316-43f5-a4f7-9b77a76f30d9\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.312977 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce71fc42-1327-4ce2-8753-feda68799f6c-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-bxl82\" (UID: \"ce71fc42-1327-4ce2-8753-feda68799f6c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.330466 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq4xz\" (UniqueName: \"kubernetes.io/projected/42d215e6-741b-4710-a7e9-b7944f744f0b-kube-api-access-wq4xz\") pod \"cluster-samples-operator-6b564684c8-sbhlq\" (UID: \"42d215e6-741b-4710-a7e9-b7944f744f0b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.352231 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" event={"ID":"1a21f262-041c-4938-bf1c-9ba06822ff62","Type":"ContainerStarted","Data":"fb003f1e2899834be4daf57924c650a8ba320103d2e337cdfaf1e9cf7796b480"} Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.355177 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: \"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.362494 5125 generic.go:358] "Generic (PLEG): container finished" podID="fbd52e79-1f71-46e5-8170-270ba85e62df" containerID="0510513a5555a14e65d4861011b9f3e3b04ce192c1b1fd7206691b1908331323" exitCode=0 Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.362574 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" event={"ID":"fbd52e79-1f71-46e5-8170-270ba85e62df","Type":"ContainerDied","Data":"0510513a5555a14e65d4861011b9f3e3b04ce192c1b1fd7206691b1908331323"} Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.366132 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbcng\" (UniqueName: \"kubernetes.io/projected/3d5b91de-c016-4a44-aab6-910f036d51ae-kube-api-access-wbcng\") pod \"router-default-68cf44c8b8-kr9dh\" (UID: \"3d5b91de-c016-4a44-aab6-910f036d51ae\") " pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.377305 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2" event={"ID":"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1","Type":"ContainerStarted","Data":"0d7eb0cf207441829493188b98d56acbc047d8436ff61052928601d31e61a886"} Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.377355 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2" event={"ID":"a7df9f2f-5671-4d9d-a30c-e2d504d7d7f1","Type":"ContainerStarted","Data":"9bd2999d4fe1eddcb5dfe30a3f53a8106c7f262736e6e07d87f91c4280b80fee"} Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.382042 5125 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.384671 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5" event={"ID":"69e82e98-c3d1-4cdd-9657-609e9e9b78d0","Type":"ContainerStarted","Data":"2da4777d9d93a9b2c43bcb6570a24d49ff32514662c4e999341425557a65fb24"} Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.394053 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.394298 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" event={"ID":"cdb7a298-ac30-410b-9ab7-a060a428e88b","Type":"ContainerStarted","Data":"18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499"} Dec 08 19:31:00 crc kubenswrapper[5125]: E1208 19:31:00.394470 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:00.894448925 +0000 UTC m=+117.664939209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.394621 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.394916 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:31:00 crc kubenswrapper[5125]: E1208 19:31:00.395108 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:00.895097692 +0000 UTC m=+117.665587966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.399109 5125 generic.go:358] "Generic (PLEG): container finished" podID="fb139a6e-970e-4662-8bef-8155c86676c4" containerID="02cb20005dfc56e3d52b76a3fef13377e6267a5a43e55e4c9d609dfd5617e5fd" exitCode=0 Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.399163 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" event={"ID":"fb139a6e-970e-4662-8bef-8155c86676c4","Type":"ContainerDied","Data":"02cb20005dfc56e3d52b76a3fef13377e6267a5a43e55e4c9d609dfd5617e5fd"} Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.399226 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" event={"ID":"fb139a6e-970e-4662-8bef-8155c86676c4","Type":"ContainerStarted","Data":"09c869741c2fb8f729663b5f7100043d8a7652536e5efbcd3d6c66de323b8de9"} Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.399239 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" event={"ID":"fb139a6e-970e-4662-8bef-8155c86676c4","Type":"ContainerStarted","Data":"88d12e3921550437f4cf584644c0facada20704c1d1095094945627febe22223"} Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.401513 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h2vx\" (UniqueName: \"kubernetes.io/projected/c46131b3-44f8-4a83-a357-31ca0197d1be-kube-api-access-8h2vx\") pod \"downloads-747b44746d-t8fbs\" (UID: 
\"c46131b3-44f8-4a83-a357-31ca0197d1be\") " pod="openshift-console/downloads-747b44746d-t8fbs" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.402319 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq" event={"ID":"965dbdfc-98cd-4eea-847b-36256d95a95e","Type":"ContainerStarted","Data":"052c19a888a7dcea7b11c85231bb2be769672fb3410b87259cdbb262e41a36e0"} Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.402352 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.405297 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-bound-sa-token\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.408286 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.416596 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.423843 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.431015 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.438628 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7hc5\" (UniqueName: \"kubernetes.io/projected/02227633-e316-43f5-a4f7-9b77a76f30d9-kube-api-access-z7hc5\") pod \"service-ca-operator-5b9c976747-z44ln\" (UID: \"02227633-e316-43f5-a4f7-9b77a76f30d9\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.443694 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.483538 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxbf6\" (UniqueName: \"kubernetes.io/projected/8e26b397-18c5-4b0e-a483-943460e35c11-kube-api-access-lxbf6\") pod \"multus-admission-controller-69db94689b-ls2zg\" (UID: \"8e26b397-18c5-4b0e-a483-943460e35c11\") " pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.496568 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:00 crc kubenswrapper[5125]: E1208 19:31:00.498025 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:00.998003951 +0000 UTC m=+117.768494225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.499270 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d09591c3-30e3-44ab-88d3-91833456f731-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.501904 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1ba9072a-064d-4d53-b64b-7315a955f22f-tmpfs\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.502254 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9272dafd-6843-41b9-bff8-998f3fd23d33-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-gsksl\" (UID: \"9272dafd-6843-41b9-bff8-998f3fd23d33\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.507153 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2c83f90-fa38-4d74-a07c-8cb71f20c3eb-config\") pod \"openshift-kube-scheduler-operator-54f497555d-9z4ll\" (UID: 
\"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.510290 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz9vq\" (UniqueName: \"kubernetes.io/projected/9272dafd-6843-41b9-bff8-998f3fd23d33-kube-api-access-hz9vq\") pod \"machine-config-controller-f9cdd68f7-gsksl\" (UID: \"9272dafd-6843-41b9-bff8-998f3fd23d33\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.521248 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.533815 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdm9x\" (UniqueName: \"kubernetes.io/projected/c0536d1a-6ff6-4063-a0e0-562241238b5b-kube-api-access-gdm9x\") pod \"ingress-canary-ghhd8\" (UID: \"c0536d1a-6ff6-4063-a0e0-562241238b5b\") " pod="openshift-ingress-canary/ingress-canary-ghhd8" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.545418 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e433cbb2-0dab-4949-93f4-beb4675b4117-kube-api-access\") pod \"kube-apiserver-operator-575994946d-xcz9f\" (UID: \"e433cbb2-0dab-4949-93f4-beb4675b4117\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.580526 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5vm6\" (UniqueName: \"kubernetes.io/projected/437bd009-7a16-4598-9da1-57f4ca950147-kube-api-access-h5vm6\") pod \"machine-config-operator-67c9d58cbb-tg5m9\" (UID: \"437bd009-7a16-4598-9da1-57f4ca950147\") " 
pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.590317 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9dgl\" (UniqueName: \"kubernetes.io/projected/ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd-kube-api-access-g9dgl\") pod \"packageserver-7d4fc7d867-jskvf\" (UID: \"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.597758 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.599331 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: E1208 19:31:00.600976 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.100964473 +0000 UTC m=+117.871454737 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.610037 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.626387 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.635942 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-75zwt\" (UniqueName: \"kubernetes.io/projected/1c52381c-f0cb-4cf1-992d-60d930ba7d00-kube-api-access-75zwt\") pod \"console-operator-67c89758df-h2kxl\" (UID: \"1c52381c-f0cb-4cf1-992d-60d930ba7d00\") " pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.636759 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkjnw\" (UniqueName: \"kubernetes.io/projected/601959f9-7e12-4ca5-9856-f962f3929720-kube-api-access-qkjnw\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nblg9\" (UID: \"601959f9-7e12-4ca5-9856-f962f3929720\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.645559 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.646092 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkls9\" (UniqueName: \"kubernetes.io/projected/7d3d93c9-073e-4463-ad22-0dc846df2d84-kube-api-access-mkls9\") pod \"cni-sysctl-allowlist-ds-x7zl6\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") " pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.664396 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9"] Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.665572 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.673135 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.690332 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-t8fbs" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.696703 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjgmj\" (UniqueName: \"kubernetes.io/projected/2eb0798c-5c61-48ac-bf09-2f21642e8e53-kube-api-access-sjgmj\") pod \"dns-default-bhnwz\" (UID: \"2eb0798c-5c61-48ac-bf09-2f21642e8e53\") " pod="openshift-dns/dns-default-bhnwz" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.700263 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bw68\" (UniqueName: \"kubernetes.io/projected/77083b49-6a76-42e1-9f35-4b34306c23d3-kube-api-access-6bw68\") pod \"marketplace-operator-547dbd544d-75h8s\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.700994 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.701157 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:00 crc kubenswrapper[5125]: E1208 19:31:00.701532 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.20151546 +0000 UTC m=+117.972005734 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.706945 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.708075 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhdvt\" (UniqueName: \"kubernetes.io/projected/c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1-kube-api-access-fhdvt\") pod \"csi-hostpathplugin-jrtpt\" (UID: \"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1\") " pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.722804 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.730142 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsb4h\" (UniqueName: \"kubernetes.io/projected/ce71fc42-1327-4ce2-8753-feda68799f6c-kube-api-access-nsb4h\") pod \"kube-storage-version-migrator-operator-565b79b866-bxl82\" (UID: \"ce71fc42-1327-4ce2-8753-feda68799f6c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.738531 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.748243 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm78l\" (UniqueName: \"kubernetes.io/projected/1ba9072a-064d-4d53-b64b-7315a955f22f-kube-api-access-mm78l\") pod \"olm-operator-5cdf44d969-m22vv\" (UID: \"1ba9072a-064d-4d53-b64b-7315a955f22f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.770696 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7vmc\" (UniqueName: \"kubernetes.io/projected/839fbd69-7068-47fa-94aa-8af954d8cbc9-kube-api-access-x7vmc\") pod \"package-server-manager-77f986bd66-4jn6q\" (UID: \"839fbd69-7068-47fa-94aa-8af954d8cbc9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.773961 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.794359 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgkdt\" (UniqueName: \"kubernetes.io/projected/24b10400-a42c-4ba4-a4fe-37c3ba5017ed-kube-api-access-lgkdt\") pod \"catalog-operator-75ff9f647d-b2vgx\" (UID: \"24b10400-a42c-4ba4-a4fe-37c3ba5017ed\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.802891 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5lb9\" (UniqueName: \"kubernetes.io/projected/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-kube-api-access-q5lb9\") pod \"collect-profiles-29420370-b586h\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.803813 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: E1208 19:31:00.804152 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.304137162 +0000 UTC m=+118.074627436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.804416 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq"] Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.812872 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.815090 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx"] Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.825195 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ghhd8" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.828289 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztz6q\" (UniqueName: \"kubernetes.io/projected/dbdef7b8-f28d-4bb3-aff4-33b97ff1e415-kube-api-access-ztz6q\") pod \"migrator-866fcbc849-44sgb\" (UID: \"dbdef7b8-f28d-4bb3-aff4-33b97ff1e415\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.833841 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bhnwz" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.865528 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6h6l\" (UniqueName: \"kubernetes.io/projected/c839200b-2680-46bc-bcfe-30b5dd4e5d03-kube-api-access-s6h6l\") pod \"machine-config-server-2rxc6\" (UID: \"c839200b-2680-46bc-bcfe-30b5dd4e5d03\") " pod="openshift-machine-config-operator/machine-config-server-2rxc6" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.884935 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb"] Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.885772 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfb8w\" (UniqueName: \"kubernetes.io/projected/e6991c6c-a51b-4cb6-a726-99f42a49e693-kube-api-access-zfb8w\") pod \"service-ca-74545575db-h6j6c\" (UID: \"e6991c6c-a51b-4cb6-a726-99f42a49e693\") " pod="openshift-service-ca/service-ca-74545575db-h6j6c" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.893075 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln"] Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.901390 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.910342 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:00 crc kubenswrapper[5125]: E1208 19:31:00.910450 5125 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.410428253 +0000 UTC m=+118.180918537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.910760 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:00 crc kubenswrapper[5125]: E1208 19:31:00.911038 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.411027669 +0000 UTC m=+118.181517943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.918568 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-whx82\" (UniqueName: \"kubernetes.io/projected/d09591c3-30e3-44ab-88d3-91833456f731-kube-api-access-whx82\") pod \"ingress-operator-6b9cb4dbcf-rd56f\" (UID: \"d09591c3-30e3-44ab-88d3-91833456f731\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.958312 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.972072 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39160: no serving certificate available for the kubelet" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.981005 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-7qxb2" podStartSLOduration=97.980985735 podStartE2EDuration="1m37.980985735s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:00.979556537 +0000 UTC m=+117.750046821" watchObservedRunningTime="2025-12-08 19:31:00.980985735 +0000 UTC m=+117.751476009" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.983847 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" Dec 08 19:31:00 crc kubenswrapper[5125]: I1208 19:31:00.993562 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.011947 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.012545 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.512529792 +0000 UTC m=+118.283020066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.014039 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.028553 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-h6j6c" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.034503 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.080756 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39176: no serving certificate available for the kubelet" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.089138 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.098333 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.110150 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-ls2zg"] Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.120461 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.120851 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.620835796 +0000 UTC m=+118.391326070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.142234 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-ckwnh" podStartSLOduration=98.14221932 podStartE2EDuration="1m38.14221932s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:01.140291828 +0000 UTC m=+117.910782112" watchObservedRunningTime="2025-12-08 19:31:01.14221932 +0000 UTC m=+117.912709594" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.142622 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2rxc6" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.172791 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-7qjcm" podStartSLOduration=98.172770969 podStartE2EDuration="1m38.172770969s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:01.172633825 +0000 UTC m=+117.943124119" watchObservedRunningTime="2025-12-08 19:31:01.172770969 +0000 UTC m=+117.943261263" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.181027 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39182: no serving certificate available for the kubelet" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.221165 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f"] Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.221950 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.222304 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.722285357 +0000 UTC m=+118.492775641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.305081 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39188: no serving certificate available for the kubelet" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.326993 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.331876 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.831784094 +0000 UTC m=+118.602274368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.371549 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39190: no serving certificate available for the kubelet" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.402385 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9"] Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.416450 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg" event={"ID":"8e26b397-18c5-4b0e-a483-943460e35c11","Type":"ContainerStarted","Data":"a22a226e142d8562791457de1807e5f641fb5f09405eed4f0d61eb37d15c2f61"} Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.418849 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" event={"ID":"7d3d93c9-073e-4463-ad22-0dc846df2d84","Type":"ContainerStarted","Data":"673c95f41551e5a0fa3b7e72473a2f765688c82ff897f75ebe851f7d8726c49e"} Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.426748 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" event={"ID":"3d5b91de-c016-4a44-aab6-910f036d51ae","Type":"ContainerStarted","Data":"3772d4350e344d0071412f09378ee58896fc40114f5707b8c18bac843829185a"} Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.426793 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" 
event={"ID":"3d5b91de-c016-4a44-aab6-910f036d51ae","Type":"ContainerStarted","Data":"6a915e6021ffcec817a39996946ef412cb6a55e97a4901e31366f502194ab3da"} Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.428945 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.429264 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:01.929250678 +0000 UTC m=+118.699740952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.435216 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" event={"ID":"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c","Type":"ContainerStarted","Data":"2a49b1495a010299abf2b6bdbd3afd14b9a025bda64b5547d422dbeb944e6a44"} Dec 08 19:31:01 crc kubenswrapper[5125]: W1208 19:31:01.436183 5125 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode433cbb2_0dab_4949_93f4_beb4675b4117.slice/crio-8198483cf3b2b34c611be5d0e7f382e597af650e8c2f9be1be2b8087f01a0aea WatchSource:0}: Error finding container 8198483cf3b2b34c611be5d0e7f382e597af650e8c2f9be1be2b8087f01a0aea: Status 404 returned error can't find the container with id 8198483cf3b2b34c611be5d0e7f382e597af650e8c2f9be1be2b8087f01a0aea Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.436384 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" event={"ID":"81694063-8439-4d15-8673-30e88676f33e","Type":"ContainerStarted","Data":"e5800063869d27e2eb2215304408fb4ce96fda58604330440a55c34dcad1ab2e"} Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.439381 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq" event={"ID":"42d215e6-741b-4710-a7e9-b7944f744f0b","Type":"ContainerStarted","Data":"ec887c803f01856685989b407e9f4aee7864adc81fa070f1843e9da11478315b"} Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.451138 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" event={"ID":"02227633-e316-43f5-a4f7-9b77a76f30d9","Type":"ContainerStarted","Data":"d1e1e79d673f33146a70132dac3f678193e9fc2098523cdb97648536f2ffe5c8"} Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.452942 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" event={"ID":"4d6421d4-f996-4c24-88de-d0cd3aee5aec","Type":"ContainerStarted","Data":"62ba03c3fdf386585181b9043da3266ecdc69160bd992642f64b969ce3e85d90"} Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.469249 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39200: no serving certificate available for the kubelet" Dec 08 
19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.483631 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl"] Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.532650 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.533239 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.033222637 +0000 UTC m=+118.803712911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.573114 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39208: no serving certificate available for the kubelet" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.625198 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.633673 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.635628 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.135588352 +0000 UTC m=+118.906078626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.654632 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-cdw7h" podStartSLOduration=98.654600182 podStartE2EDuration="1m38.654600182s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:01.620049135 +0000 UTC m=+118.390539419" watchObservedRunningTime="2025-12-08 19:31:01.654600182 +0000 UTC m=+118.425090456" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.679168 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39220: no serving certificate available for the kubelet" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.738558 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.739144 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 19:31:02.239126649 +0000 UTC m=+119.009616933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.771844 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" podStartSLOduration=98.771828586 podStartE2EDuration="1m38.771828586s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:01.769924575 +0000 UTC m=+118.540414849" watchObservedRunningTime="2025-12-08 19:31:01.771828586 +0000 UTC m=+118.542318860" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.839758 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.840054 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.340009315 +0000 UTC m=+119.110499589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.840528 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.840847 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.340839197 +0000 UTC m=+119.111329471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.903363 5125 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kr9dh container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.903422 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" podUID="3d5b91de-c016-4a44-aab6-910f036d51ae" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.945381 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.945516 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.445496093 +0000 UTC m=+119.215986377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.945657 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:01 crc kubenswrapper[5125]: E1208 19:31:01.946092 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.44608212 +0000 UTC m=+119.216572394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:01 crc kubenswrapper[5125]: I1208 19:31:01.975418 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" podStartSLOduration=97.975395816 podStartE2EDuration="1m37.975395816s" podCreationTimestamp="2025-12-08 19:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:01.974082751 +0000 UTC m=+118.744573025" watchObservedRunningTime="2025-12-08 19:31:01.975395816 +0000 UTC m=+118.745886080" Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.047704 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.047869 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.547820578 +0000 UTC m=+119.318310852 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.051564 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.051928 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.551913168 +0000 UTC m=+119.322403442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.085519 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.130441 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.139547 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.158027 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.158212 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.658172368 +0000 UTC m=+119.428662642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.158526 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.158991 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.65897472 +0000 UTC m=+119.429464994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.231971 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-t8fbs"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.260000 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.261235 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.761213642 +0000 UTC m=+119.531703916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:02 crc kubenswrapper[5125]: W1208 19:31:02.277747 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod601959f9_7e12_4ca5_9856_f962f3929720.slice/crio-cf75d07f119b1cc194b8e85f1cc1c72755b5536799fe7c1aa3b1d57facf73aac WatchSource:0}: Error finding container cf75d07f119b1cc194b8e85f1cc1c72755b5536799fe7c1aa3b1d57facf73aac: Status 404 returned error can't find the container with id cf75d07f119b1cc194b8e85f1cc1c72755b5536799fe7c1aa3b1d57facf73aac
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.359890 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" podStartSLOduration=99.359869667 podStartE2EDuration="1m39.359869667s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:02.345138903 +0000 UTC m=+119.115629187" watchObservedRunningTime="2025-12-08 19:31:02.359869667 +0000 UTC m=+119.130359941"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.388488 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39224: no serving certificate available for the kubelet"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.391275 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.391944 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.891925697 +0000 UTC m=+119.662415971 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.430411 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.430530 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" podStartSLOduration=99.430520303 podStartE2EDuration="1m39.430520303s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:02.407274049 +0000 UTC m=+119.177764323" watchObservedRunningTime="2025-12-08 19:31:02.430520303 +0000 UTC m=+119.201010587"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.438803 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-h5dj4" podStartSLOduration=99.438785504 podStartE2EDuration="1m39.438785504s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:02.437831528 +0000 UTC m=+119.208321812" watchObservedRunningTime="2025-12-08 19:31:02.438785504 +0000 UTC m=+119.209275778"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.494566 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.494726 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.994692324 +0000 UTC m=+119.765182598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.494864 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.495211 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:02.995202347 +0000 UTC m=+119.765692621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.496900 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bhnwz"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.511809 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-fwfm2" podStartSLOduration=99.511797202 podStartE2EDuration="1m39.511797202s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:02.510869037 +0000 UTC m=+119.281359321" watchObservedRunningTime="2025-12-08 19:31:02.511797202 +0000 UTC m=+119.282287466"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.609100 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.609455 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.10943448 +0000 UTC m=+119.879924754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.615250 5125 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kr9dh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 19:31:02 crc kubenswrapper[5125]: [-]has-synced failed: reason withheld
Dec 08 19:31:02 crc kubenswrapper[5125]: [+]process-running ok
Dec 08 19:31:02 crc kubenswrapper[5125]: healthz check failed
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.615309 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" podUID="3d5b91de-c016-4a44-aab6-910f036d51ae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.636047 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" event={"ID":"02227633-e316-43f5-a4f7-9b77a76f30d9","Type":"ContainerStarted","Data":"49480d072903fec01bf15be5490c10bff320451bcb415374c01b6d7fea6649cd"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.648383 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" event={"ID":"4d6421d4-f996-4c24-88de-d0cd3aee5aec","Type":"ContainerStarted","Data":"cde2bd5235117eaaabec2f88ce3599c3ec52daebe4e44296274072b27b47341a"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.650167 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq" podStartSLOduration=99.650154753 podStartE2EDuration="1m39.650154753s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:02.636443875 +0000 UTC m=+119.406934159" watchObservedRunningTime="2025-12-08 19:31:02.650154753 +0000 UTC m=+119.420645027"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.654762 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-h2kxl"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.657825 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-jrtpt"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.710889 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.711705 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.211689314 +0000 UTC m=+119.982179588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.726145 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" event={"ID":"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb","Type":"ContainerStarted","Data":"686ebae535b6485809c6e3fb00be37d58505532e44a51f0f1a58db7ada90f0af"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.736720 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" event={"ID":"1ba9072a-064d-4d53-b64b-7315a955f22f","Type":"ContainerStarted","Data":"020f919bfffe0ed988d7cd44bbf761233d0ec7d272989319421c02e3b0e006bd"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.737675 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2rxc6" event={"ID":"c839200b-2680-46bc-bcfe-30b5dd4e5d03","Type":"ContainerStarted","Data":"ea6e2c81a8933f80b065db1e7223cdb766242441f2b86c01c318648972ea820b"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.742513 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.744582 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.758300 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-75h8s"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.758781 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" event={"ID":"9272dafd-6843-41b9-bff8-998f3fd23d33","Type":"ContainerStarted","Data":"e859f63979505ec791d4715f30cbe4820e3b8446200926b46b2c42b9bbb2c14f"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.758817 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" event={"ID":"9272dafd-6843-41b9-bff8-998f3fd23d33","Type":"ContainerStarted","Data":"8016083887d118b09553225e907a6f41a93c317ea013520850f5fe62d45ee2d1"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.788923 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.789900 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" event={"ID":"437bd009-7a16-4598-9da1-57f4ca950147","Type":"ContainerStarted","Data":"5782cd9e1c88e1e6c511823c9fed2196d45bec80b99d736c43297f64e296a380"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.789923 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" event={"ID":"437bd009-7a16-4598-9da1-57f4ca950147","Type":"ContainerStarted","Data":"26e7e05b4f8ec3671db34b1729b3af88218c6af8ce5df67a9a24a59b47a2f93a"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.790396 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.791119 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-t8fbs" event={"ID":"c46131b3-44f8-4a83-a357-31ca0197d1be","Type":"ContainerStarted","Data":"35e83eb53ab1c3be7513613e727a9d914803b16c72862ae5fcadea4b45d20c8a"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.792134 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.792431 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9" event={"ID":"601959f9-7e12-4ca5-9856-f962f3929720","Type":"ContainerStarted","Data":"cf75d07f119b1cc194b8e85f1cc1c72755b5536799fe7c1aa3b1d57facf73aac"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.794069 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.800988 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ghhd8"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.811215 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" podStartSLOduration=99.811189942 podStartE2EDuration="1m39.811189942s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:02.756213937 +0000 UTC m=+119.526704231" watchObservedRunningTime="2025-12-08 19:31:02.811189942 +0000 UTC m=+119.581680236"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.812597 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.813317 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-tm7d5" podStartSLOduration=99.813307519 podStartE2EDuration="1m39.813307519s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:02.775371081 +0000 UTC m=+119.545861365" watchObservedRunningTime="2025-12-08 19:31:02.813307519 +0000 UTC m=+119.583797793"
Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.814159 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.314138241 +0000 UTC m=+120.084628525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.816774 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-bfrm9" podStartSLOduration=99.816764442 podStartE2EDuration="1m39.816764442s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:02.795560792 +0000 UTC m=+119.566051076" watchObservedRunningTime="2025-12-08 19:31:02.816764442 +0000 UTC m=+119.587254716"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.820509 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" event={"ID":"fbd52e79-1f71-46e5-8170-270ba85e62df","Type":"ContainerStarted","Data":"074200c99d4b786f46f9b4c28c2e7c2d7299454f14e063408519297a8e62363c"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.826248 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-z44ln" podStartSLOduration=98.826235385 podStartE2EDuration="1m38.826235385s" podCreationTimestamp="2025-12-08 19:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:02.824185021 +0000 UTC m=+119.594675305" watchObservedRunningTime="2025-12-08 19:31:02.826235385 +0000 UTC m=+119.596725659"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.828592 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" event={"ID":"81694063-8439-4d15-8673-30e88676f33e","Type":"ContainerStarted","Data":"6b10d93bd69ddd251abf87b1555be2f9761537c49cef65437de6ec881b365dbd"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.847889 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" event={"ID":"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd","Type":"ContainerStarted","Data":"bb594677ec9adadd5b59b26f026a129d6bdbd19da1ca38d57e634e58acfe8c32"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.854667 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" event={"ID":"e433cbb2-0dab-4949-93f4-beb4675b4117","Type":"ContainerStarted","Data":"8198483cf3b2b34c611be5d0e7f382e597af650e8c2f9be1be2b8087f01a0aea"}
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.857419 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" podStartSLOduration=99.857405722 podStartE2EDuration="1m39.857405722s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:02.85733437 +0000 UTC m=+119.627824664" watchObservedRunningTime="2025-12-08 19:31:02.857405722 +0000 UTC m=+119.627895996"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.860084 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-h6j6c"]
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.877583 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-t2wgb" podStartSLOduration=99.877563172 podStartE2EDuration="1m39.877563172s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:02.876560735 +0000 UTC m=+119.647051029" watchObservedRunningTime="2025-12-08 19:31:02.877563172 +0000 UTC m=+119.648053456"
Dec 08 19:31:02 crc kubenswrapper[5125]: I1208 19:31:02.914756 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:02 crc kubenswrapper[5125]: E1208 19:31:02.915195 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.415178671 +0000 UTC m=+120.185668945 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.020422 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:03 crc kubenswrapper[5125]: E1208 19:31:03.021107 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.521079101 +0000 UTC m=+120.291569385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.125476 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:03 crc kubenswrapper[5125]: E1208 19:31:03.126336 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.626315164 +0000 UTC m=+120.396805448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.227084 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:03 crc kubenswrapper[5125]: E1208 19:31:03.227374 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.727357934 +0000 UTC m=+120.497848208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.271851 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.272770 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.293313 5125 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-v5nx6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 19:31:03 crc kubenswrapper[5125]: [+]log ok Dec 08 19:31:03 crc kubenswrapper[5125]: [+]etcd ok Dec 08 19:31:03 crc kubenswrapper[5125]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 19:31:03 crc kubenswrapper[5125]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 19:31:03 crc kubenswrapper[5125]: [+]poststarthook/max-in-flight-filter ok Dec 08 19:31:03 crc kubenswrapper[5125]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 19:31:03 crc kubenswrapper[5125]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 08 19:31:03 crc kubenswrapper[5125]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 08 19:31:03 crc kubenswrapper[5125]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 08 19:31:03 crc kubenswrapper[5125]: 
[+]poststarthook/project.openshift.io-projectcache ok Dec 08 19:31:03 crc kubenswrapper[5125]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 08 19:31:03 crc kubenswrapper[5125]: [+]poststarthook/openshift.io-startinformers ok Dec 08 19:31:03 crc kubenswrapper[5125]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 08 19:31:03 crc kubenswrapper[5125]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 19:31:03 crc kubenswrapper[5125]: livez check failed Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.293388 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" podUID="fb139a6e-970e-4662-8bef-8155c86676c4" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.332814 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:03 crc kubenswrapper[5125]: E1208 19:31:03.333646 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.833627814 +0000 UTC m=+120.604118088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.434323 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:03 crc kubenswrapper[5125]: E1208 19:31:03.434913 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:03.93489454 +0000 UTC m=+120.705384814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.536269 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:03 crc kubenswrapper[5125]: E1208 19:31:03.537054 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.037038849 +0000 UTC m=+120.807529123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.603883 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.603928 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.613940 5125 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kr9dh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:03 crc kubenswrapper[5125]: [-]has-synced failed: reason withheld Dec 08 19:31:03 crc kubenswrapper[5125]: [+]process-running ok Dec 08 19:31:03 crc kubenswrapper[5125]: healthz check failed Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.614001 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" podUID="3d5b91de-c016-4a44-aab6-910f036d51ae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.618969 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc" Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.640544 5125 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:03 crc kubenswrapper[5125]: E1208 19:31:03.641057 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.141035088 +0000 UTC m=+120.911525362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.734206 5125 ???:1] "http: TLS handshake error from 192.168.126.11:34760: no serving certificate available for the kubelet" Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.743118 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:03 crc kubenswrapper[5125]: E1208 19:31:03.743450 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.243437625 +0000 UTC m=+121.013927899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.854750 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:03 crc kubenswrapper[5125]: E1208 19:31:03.855624 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.355575042 +0000 UTC m=+121.126065316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.912643 5125 scope.go:117] "RemoveContainer" containerID="346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af" Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.924380 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2rxc6" event={"ID":"c839200b-2680-46bc-bcfe-30b5dd4e5d03","Type":"ContainerStarted","Data":"b93629f962af3b29ddc7e83b35ef0a10bef45f7f2159a51c116e5c45b3de7ea2"} Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.947123 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" event={"ID":"9272dafd-6843-41b9-bff8-998f3fd23d33","Type":"ContainerStarted","Data":"5b288badf37ffd57ffd9559c55c675c98447534cfc5f00dd19f73cfd0aa69cad"} Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.950846 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" event={"ID":"77083b49-6a76-42e1-9f35-4b34306c23d3","Type":"ContainerStarted","Data":"888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd"} Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.950890 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" event={"ID":"77083b49-6a76-42e1-9f35-4b34306c23d3","Type":"ContainerStarted","Data":"0745172ec75d8a72ba45ea814ca563051989e1f665af3892649a2166bca58b37"} Dec 08 
19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.951540 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.959724 5125 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-75h8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.959782 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" podUID="77083b49-6a76-42e1-9f35-4b34306c23d3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.960881 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-h6j6c" event={"ID":"e6991c6c-a51b-4cb6-a726-99f42a49e693","Type":"ContainerStarted","Data":"9058a70474044f780dc0d3548d191ad2c46588f089578422ca7c3b0f014a3789"} Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.960911 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-h6j6c" event={"ID":"e6991c6c-a51b-4cb6-a726-99f42a49e693","Type":"ContainerStarted","Data":"864d6991faf62354a1a4f29774399cd402788432517f38ffc1d9bdb8f90041a2"} Dec 08 19:31:03 crc kubenswrapper[5125]: I1208 19:31:03.960883 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " 
pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:03 crc kubenswrapper[5125]: E1208 19:31:03.961093 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.461081862 +0000 UTC m=+121.231572136 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.007064 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" event={"ID":"d09591c3-30e3-44ab-88d3-91833456f731","Type":"ContainerStarted","Data":"98a7c6d686e40ab8fa596b534b0bf0c4f4f766dc36aa2f0de136a195836394a1"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.024802 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-h2kxl" event={"ID":"1c52381c-f0cb-4cf1-992d-60d930ba7d00","Type":"ContainerStarted","Data":"f05bf8b40c2d8222011192b46fb4250d03d2e078a4043f484981f2537c22cf9f"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.026753 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.030895 5125 patch_prober.go:28] interesting pod/console-operator-67c89758df-h2kxl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure 
output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.030952 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-h2kxl" podUID="1c52381c-f0cb-4cf1-992d-60d930ba7d00" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/readyz\": dial tcp 10.217.0.19:8443: connect: connection refused" Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.057411 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-t8fbs" event={"ID":"c46131b3-44f8-4a83-a357-31ca0197d1be","Type":"ContainerStarted","Data":"345b5c372e8c2181ab3c999d7d0731a3c9be3b47a7ebd8510ba6f530576babae"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.058544 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-t8fbs" Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.064748 5125 patch_prober.go:28] interesting pod/downloads-747b44746d-t8fbs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.064807 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-t8fbs" podUID="c46131b3-44f8-4a83-a357-31ca0197d1be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.072764 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:04 crc kubenswrapper[5125]: E1208 19:31:04.075775 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.575752078 +0000 UTC m=+121.346242352 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.076120 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:04 crc kubenswrapper[5125]: E1208 19:31:04.076485 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.576478547 +0000 UTC m=+121.346968821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.080326 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9" event={"ID":"601959f9-7e12-4ca5-9856-f962f3929720","Type":"ContainerStarted","Data":"291db95ed41bdffb7466b33fba8c6b923b06296f4c0373135385f71ed33694af"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.093819 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" event={"ID":"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9","Type":"ContainerStarted","Data":"63e85d506157742700fdf95e4223aec2234978285760745f4ce5d8ba8fd3fb4f"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.146906 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" event={"ID":"7d3d93c9-073e-4463-ad22-0dc846df2d84","Type":"ContainerStarted","Data":"b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.152056 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.175921 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb" 
event={"ID":"dbdef7b8-f28d-4bb3-aff4-33b97ff1e415","Type":"ContainerStarted","Data":"40ff3054cfccc640480c0ac12858e078200db5a4973f7965ac06103703cf858d"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.176688 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:04 crc kubenswrapper[5125]: E1208 19:31:04.177839 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.677822316 +0000 UTC m=+121.448312590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.206709 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bhnwz" event={"ID":"2eb0798c-5c61-48ac-bf09-2f21642e8e53","Type":"ContainerStarted","Data":"a64dc39d36d7db8fcc40ef34adc85028cdfb3d5a28a5000e30fa47fd48b93cbf"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.206768 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bhnwz" 
event={"ID":"2eb0798c-5c61-48ac-bf09-2f21642e8e53","Type":"ContainerStarted","Data":"0af4bc0432bcf56f6cef0dff4d72e53816e9d8963661cb3ff0c08bff1f531a2d"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.221290 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" event={"ID":"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1","Type":"ContainerStarted","Data":"048c0eb70327bf601e44f768bc4541bcc16bb2ce7143a08e89fa760efd4e05bf"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.234926 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" event={"ID":"e433cbb2-0dab-4949-93f4-beb4675b4117","Type":"ContainerStarted","Data":"5631977f07f303f932101fd6c09edb7bf59e419c950372d9352cc77bc687ca49"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.253199 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" event={"ID":"839fbd69-7068-47fa-94aa-8af954d8cbc9","Type":"ContainerStarted","Data":"5a673c0a52d370b351b6ec50cf412e5e87bd98f170ee9dbc5c395445479afa8f"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.253761 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" event={"ID":"839fbd69-7068-47fa-94aa-8af954d8cbc9","Type":"ContainerStarted","Data":"9c47264ca209966c464a36c93e0757dd3e0e4a82bdfde5a4c400c34fac0b39a6"} Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.254588 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.278377 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:04 crc kubenswrapper[5125]: E1208 19:31:04.278707 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.778692351 +0000 UTC m=+121.549182625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.289767 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" event={"ID":"24b10400-a42c-4ba4-a4fe-37c3ba5017ed","Type":"ContainerStarted","Data":"00ea324e560cc21fe250902338f43397eda927a34c5d5bb4a8f00f60969a15ee"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.289803 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" event={"ID":"24b10400-a42c-4ba4-a4fe-37c3ba5017ed","Type":"ContainerStarted","Data":"d5b8b45a69e16a2fbc041923074f281eae1e7bb563494b6773a185810c14e7a1"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.291286 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.317290 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" event={"ID":"c2c83f90-fa38-4d74-a07c-8cb71f20c3eb","Type":"ContainerStarted","Data":"134a63847b0d5fa3db85514ddc7bdaa2126919559b6d25ed68ef8eb91dadfc7d"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.330801 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.332770 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.341088 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" event={"ID":"437bd009-7a16-4598-9da1-57f4ca950147","Type":"ContainerStarted","Data":"25c12ae389d5c7fd4b9d8f52e23ec6956d835e8dc07a66c643ac5b457f08c9f9"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.352141 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg" event={"ID":"8e26b397-18c5-4b0e-a483-943460e35c11","Type":"ContainerStarted","Data":"477d77971ce2952e8deba2545a7862d87f154bff16a733a39847c0e8645971cd"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.366554 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" event={"ID":"ce71fc42-1327-4ce2-8753-feda68799f6c","Type":"ContainerStarted","Data":"faee030e4efd154f22eb7ed6fe858f75070485c1a082d625a85d12313e88d021"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.366584 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" event={"ID":"ce71fc42-1327-4ce2-8753-feda68799f6c","Type":"ContainerStarted","Data":"bd0caff44aad13521abf3e219e79e83035e3fb05d0a222989de2c03e884a0f79"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.382859 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:04 crc kubenswrapper[5125]: E1208 19:31:04.384470 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.884454237 +0000 UTC m=+121.654944511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.401914 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" event={"ID":"dd5a81f9-3ca2-4c34-9160-5db0dd237f3c","Type":"ContainerStarted","Data":"0414613f2111df3b53ce8f4127202b6a748c78bf3592ed28a39b55dbc58da0d3"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.416688 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" event={"ID":"ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd","Type":"ContainerStarted","Data":"22288b026b8310e0e930341bc7c5558305425d77d46e0d54d60fd5ec1d6e535c"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.417688 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.437265 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" event={"ID":"1ba9072a-064d-4d53-b64b-7315a955f22f","Type":"ContainerStarted","Data":"12b19b3e17cc289f6d5cc2482da0af7319e6a4f722973ab70d501d25492c7374"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.437921 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.461025 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.487168 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq" event={"ID":"42d215e6-741b-4710-a7e9-b7944f744f0b","Type":"ContainerStarted","Data":"e8c764c3d27db9fea81682b3652a6307f0c8f9ccceac65f543a0005963b5bd1f"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.487213 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq" event={"ID":"42d215e6-741b-4710-a7e9-b7944f744f0b","Type":"ContainerStarted","Data":"2355c840d60e9c4319f74bbf4ca85e48833f01e886c5600cb60eb9198fc83252"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.487554 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-5xzhq"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.487687 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:04 crc kubenswrapper[5125]: E1208 19:31:04.489200 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:04.988986611 +0000 UTC m=+121.759476885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.523441 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ghhd8" event={"ID":"c0536d1a-6ff6-4063-a0e0-562241238b5b","Type":"ContainerStarted","Data":"4ef3fdd7ab09f864447159ebd1366e67e128632ded0a4367c47152f82589c106"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.523486 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ghhd8" event={"ID":"c0536d1a-6ff6-4063-a0e0-562241238b5b","Type":"ContainerStarted","Data":"5ab8de86b0117f49bb0eacf7908c426c3274ebe27d2e99eec7cfb331ab0fdd0d"}
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.534825 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-pkvvc"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.590279 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:04 crc kubenswrapper[5125]: E1208 19:31:04.590738 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.09071463 +0000 UTC m=+121.861204904 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.591691 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:04 crc kubenswrapper[5125]: E1208 19:31:04.592435 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.092426305 +0000 UTC m=+121.862916579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.594136 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-b2vgx" podStartSLOduration=101.594125591 podStartE2EDuration="1m41.594125591s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.586986119 +0000 UTC m=+121.357476403" watchObservedRunningTime="2025-12-08 19:31:04.594125591 +0000 UTC m=+121.364615865"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.621820 5125 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kr9dh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 19:31:04 crc kubenswrapper[5125]: [-]has-synced failed: reason withheld
Dec 08 19:31:04 crc kubenswrapper[5125]: [+]process-running ok
Dec 08 19:31:04 crc kubenswrapper[5125]: healthz check failed
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.621875 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" podUID="3d5b91de-c016-4a44-aab6-910f036d51ae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.642492 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nblg9" podStartSLOduration=101.642472428 podStartE2EDuration="1m41.642472428s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.639162838 +0000 UTC m=+121.409653122" watchObservedRunningTime="2025-12-08 19:31:04.642472428 +0000 UTC m=+121.412962702"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.675936 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" podStartSLOduration=101.675918764 podStartE2EDuration="1m41.675918764s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.673358766 +0000 UTC m=+121.443849050" watchObservedRunningTime="2025-12-08 19:31:04.675918764 +0000 UTC m=+121.446409048"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.696229 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:04 crc kubenswrapper[5125]: E1208 19:31:04.696527 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.196506046 +0000 UTC m=+121.966996320 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.729537 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-tg5m9" podStartSLOduration=101.729517532 podStartE2EDuration="1m41.729517532s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.728508275 +0000 UTC m=+121.498998559" watchObservedRunningTime="2025-12-08 19:31:04.729517532 +0000 UTC m=+121.500007806"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.741701 5125 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-jskvf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 08 19:31:04 crc kubenswrapper[5125]: [+]log ok
Dec 08 19:31:04 crc kubenswrapper[5125]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 08 19:31:04 crc kubenswrapper[5125]: [-]poststarthook/max-in-flight-filter failed: reason withheld
Dec 08 19:31:04 crc kubenswrapper[5125]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 08 19:31:04 crc kubenswrapper[5125]: healthz check failed
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.741770 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" podUID="ea8b2aeb-7cb2-4d20-920d-4c97bef6a2fd" containerName="packageserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.796627 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" podStartSLOduration=101.796594561 podStartE2EDuration="1m41.796594561s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.793142218 +0000 UTC m=+121.563632492" watchObservedRunningTime="2025-12-08 19:31:04.796594561 +0000 UTC m=+121.567084835"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.797578 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:04 crc kubenswrapper[5125]: E1208 19:31:04.797875 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.297863444 +0000 UTC m=+122.068353718 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.841112 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-xcz9f" podStartSLOduration=101.841096704 podStartE2EDuration="1m41.841096704s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.819523886 +0000 UTC m=+121.590014160" watchObservedRunningTime="2025-12-08 19:31:04.841096704 +0000 UTC m=+121.611586978"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.841986 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-h2kxl" podStartSLOduration=101.841978208 podStartE2EDuration="1m41.841978208s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.839682506 +0000 UTC m=+121.610172800" watchObservedRunningTime="2025-12-08 19:31:04.841978208 +0000 UTC m=+121.612468482"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.887521 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-bxl82" podStartSLOduration=101.887502689 podStartE2EDuration="1m41.887502689s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.884827078 +0000 UTC m=+121.655317392" watchObservedRunningTime="2025-12-08 19:31:04.887502689 +0000 UTC m=+121.657992953"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.887809 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" podStartSLOduration=64.887804517 podStartE2EDuration="1m4.887804517s" podCreationTimestamp="2025-12-08 19:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.863732662 +0000 UTC m=+121.634222946" watchObservedRunningTime="2025-12-08 19:31:04.887804517 +0000 UTC m=+121.658294791"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.899014 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:04 crc kubenswrapper[5125]: E1208 19:31:04.899327 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.399311916 +0000 UTC m=+122.169802190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.911428 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-9z4ll" podStartSLOduration=101.91138748 podStartE2EDuration="1m41.91138748s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.909330244 +0000 UTC m=+121.679820528" watchObservedRunningTime="2025-12-08 19:31:04.91138748 +0000 UTC m=+121.681877754"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.925946 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-h6j6c" podStartSLOduration=100.9259281 podStartE2EDuration="1m40.9259281s" podCreationTimestamp="2025-12-08 19:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.924136752 +0000 UTC m=+121.694627016" watchObservedRunningTime="2025-12-08 19:31:04.9259281 +0000 UTC m=+121.696418374"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.952888 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" podStartSLOduration=101.952870332 podStartE2EDuration="1m41.952870332s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.950369955 +0000 UTC m=+121.720860229" watchObservedRunningTime="2025-12-08 19:31:04.952870332 +0000 UTC m=+121.723360606"
Dec 08 19:31:04 crc kubenswrapper[5125]: I1208 19:31:04.967288 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-m22vv" podStartSLOduration=101.967267368 podStartE2EDuration="1m41.967267368s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:04.967045902 +0000 UTC m=+121.737536196" watchObservedRunningTime="2025-12-08 19:31:04.967267368 +0000 UTC m=+121.737757642"
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.000296 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.000592 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.500580842 +0000 UTC m=+122.271071116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.006349 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2rxc6" podStartSLOduration=8.006330326 podStartE2EDuration="8.006330326s" podCreationTimestamp="2025-12-08 19:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.005275238 +0000 UTC m=+121.775765522" watchObservedRunningTime="2025-12-08 19:31:05.006330326 +0000 UTC m=+121.776820600"
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.024938 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-t8fbs" podStartSLOduration=102.024920185 podStartE2EDuration="1m42.024920185s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.024852653 +0000 UTC m=+121.795342927" watchObservedRunningTime="2025-12-08 19:31:05.024920185 +0000 UTC m=+121.795410459"
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.045085 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8xhhx" podStartSLOduration=102.045070645 podStartE2EDuration="1m42.045070645s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.042160627 +0000 UTC m=+121.812650901" watchObservedRunningTime="2025-12-08 19:31:05.045070645 +0000 UTC m=+121.815560909"
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.079005 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-gsksl" podStartSLOduration=102.078986914 podStartE2EDuration="1m42.078986914s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.078874821 +0000 UTC m=+121.849365105" watchObservedRunningTime="2025-12-08 19:31:05.078986914 +0000 UTC m=+121.849477188"
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.101181 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.101529 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.601510278 +0000 UTC m=+122.372000552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.107000 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg" podStartSLOduration=102.106981015 podStartE2EDuration="1m42.106981015s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.104881459 +0000 UTC m=+121.875371733" watchObservedRunningTime="2025-12-08 19:31:05.106981015 +0000 UTC m=+121.877471289"
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.129735 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" podStartSLOduration=9.129718616 podStartE2EDuration="9.129718616s" podCreationTimestamp="2025-12-08 19:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.127223129 +0000 UTC m=+121.897713403" watchObservedRunningTime="2025-12-08 19:31:05.129718616 +0000 UTC m=+121.900208890"
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.144290 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-ghhd8" podStartSLOduration=9.144275206 podStartE2EDuration="9.144275206s" podCreationTimestamp="2025-12-08 19:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.14331286 +0000 UTC m=+121.913803134" watchObservedRunningTime="2025-12-08 19:31:05.144275206 +0000 UTC m=+121.914765480"
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.212052 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.212899 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.712867335 +0000 UTC m=+122.483357609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.248545 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sbhlq" podStartSLOduration=102.248530642 podStartE2EDuration="1m42.248530642s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.247461164 +0000 UTC m=+122.017951458" watchObservedRunningTime="2025-12-08 19:31:05.248530642 +0000 UTC m=+122.019020906"
Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.314170 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.814150732 +0000 UTC m=+122.584641006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.314198 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.314488 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.314792 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.814785409 +0000 UTC m=+122.585275683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.415727 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.415880 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.915842989 +0000 UTC m=+122.686333273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.416135 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.416489 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:05.916475246 +0000 UTC m=+122.686965520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.517407 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.517595 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.017569068 +0000 UTC m=+122.788059332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.518128 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.518405 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.01839226 +0000 UTC m=+122.788882524 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.530710 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" event={"ID":"d09591c3-30e3-44ab-88d3-91833456f731","Type":"ContainerStarted","Data":"ac591fe989260e426b79d8e45b01357bc075fe75d14d5e4b367e595e1f6c6696"} Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.530752 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" event={"ID":"d09591c3-30e3-44ab-88d3-91833456f731","Type":"ContainerStarted","Data":"4b565e207d39e657eddcce7eeeda6a827da14afdd26822cb81eff76bbe31ff65"} Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.547636 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-h2kxl" event={"ID":"1c52381c-f0cb-4cf1-992d-60d930ba7d00","Type":"ContainerStarted","Data":"0fba685e6ed4188c99c1ef4ba5aef57266a014bcc1d5e0d5ac8db2e7328d53e6"} Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.566848 5125 generic.go:358] "Generic (PLEG): container finished" podID="f078a28d-3d9d-41a2-b283-7c1f76ebbfc9" containerID="c7a68036df9b7994dad4b5e4a0b4806c4657dd4b88c4122f7af53aaa3d04ca79" exitCode=0 Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.566912 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" 
event={"ID":"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9","Type":"ContainerDied","Data":"c7a68036df9b7994dad4b5e4a0b4806c4657dd4b88c4122f7af53aaa3d04ca79"} Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.571282 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rd56f" podStartSLOduration=102.571260418 podStartE2EDuration="1m42.571260418s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.566737567 +0000 UTC m=+122.337227851" watchObservedRunningTime="2025-12-08 19:31:05.571260418 +0000 UTC m=+122.341750692" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.573541 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb" event={"ID":"dbdef7b8-f28d-4bb3-aff4-33b97ff1e415","Type":"ContainerStarted","Data":"7da80d5713ee94e3cc7b8f60b8a37d4cc507a558d430ce5bfc99ab860dc32d87"} Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.573579 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb" event={"ID":"dbdef7b8-f28d-4bb3-aff4-33b97ff1e415","Type":"ContainerStarted","Data":"cf29a43e2b5ea9b2ffea6a7c9987c598fc8505f51dc9eae3e55b18216b3d1079"} Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.579886 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bhnwz" event={"ID":"2eb0798c-5c61-48ac-bf09-2f21642e8e53","Type":"ContainerStarted","Data":"68a6748915d689d81f1281dcdb52d7b140698149b1767866afe275acd6979a6f"} Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.580100 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-bhnwz" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.582249 5125 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q" event={"ID":"839fbd69-7068-47fa-94aa-8af954d8cbc9","Type":"ContainerStarted","Data":"c3d1e38919827133e79fa051aa47726756125dd712693402a94a03d5f6e96c9a"} Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.583983 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.585767 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb"} Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.586259 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.595398 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-ls2zg" event={"ID":"8e26b397-18c5-4b0e-a483-943460e35c11","Type":"ContainerStarted","Data":"3028b8304a6c72e000a47a89b33cb9d322bd460ecb66f089c566e7b9e5eb0c19"} Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.596597 5125 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-75h8s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.596666 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" podUID="77083b49-6a76-42e1-9f35-4b34306c23d3" containerName="marketplace-operator" 
probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.599020 5125 patch_prober.go:28] interesting pod/downloads-747b44746d-t8fbs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.599070 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-t8fbs" podUID="c46131b3-44f8-4a83-a357-31ca0197d1be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.609778 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jskvf" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.619853 5125 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kr9dh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:05 crc kubenswrapper[5125]: [-]has-synced failed: reason withheld Dec 08 19:31:05 crc kubenswrapper[5125]: [+]process-running ok Dec 08 19:31:05 crc kubenswrapper[5125]: healthz check failed Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.619925 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" podUID="3d5b91de-c016-4a44-aab6-910f036d51ae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.620273 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.621548 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.121526916 +0000 UTC m=+122.892017190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.691321 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bhnwz" podStartSLOduration=9.691304768 podStartE2EDuration="9.691304768s" podCreationTimestamp="2025-12-08 19:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.689854729 +0000 UTC m=+122.460345023" watchObservedRunningTime="2025-12-08 19:31:05.691304768 +0000 UTC m=+122.461795042" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.723643 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.726708 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.226692536 +0000 UTC m=+122.997182800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.743790 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=45.743774714 podStartE2EDuration="45.743774714s" podCreationTimestamp="2025-12-08 19:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.719625587 +0000 UTC m=+122.490115881" watchObservedRunningTime="2025-12-08 19:31:05.743774714 +0000 UTC m=+122.514264988" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.825391 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.825588 
5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.325557037 +0000 UTC m=+123.096047311 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.826283 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.826851 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.326828622 +0000 UTC m=+123.097318896 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.900869 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-44sgb" podStartSLOduration=102.900838717 podStartE2EDuration="1m42.900838717s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:05.744031781 +0000 UTC m=+122.514522065" watchObservedRunningTime="2025-12-08 19:31:05.900838717 +0000 UTC m=+122.671328991" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.903303 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gs6mc"] Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.906855 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.912424 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.927966 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:05 crc kubenswrapper[5125]: E1208 19:31:05.928490 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.428471918 +0000 UTC m=+123.198962192 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:05 crc kubenswrapper[5125]: I1208 19:31:05.950010 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gs6mc"] Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.029322 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-catalog-content\") pod \"certified-operators-gs6mc\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") " pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.029370 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-utilities\") pod \"certified-operators-gs6mc\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") " pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.029684 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.029777 5125 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8vrv\" (UniqueName: \"kubernetes.io/projected/9e9aba28-961e-4643-92d8-d718748862c6-kube-api-access-q8vrv\") pod \"certified-operators-gs6mc\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") " pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.030148 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.530132625 +0000 UTC m=+123.300622909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.080380 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c5dng"] Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.084942 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.090415 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.090722 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5dng"] Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.121876 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-x7zl6"] Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.131197 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.131364 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.631336339 +0000 UTC m=+123.401826613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.131659 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q8vrv\" (UniqueName: \"kubernetes.io/projected/9e9aba28-961e-4643-92d8-d718748862c6-kube-api-access-q8vrv\") pod \"certified-operators-gs6mc\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") " pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.131862 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-catalog-content\") pod \"certified-operators-gs6mc\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") " pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.131912 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-utilities\") pod \"certified-operators-gs6mc\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") " pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.132164 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: 
\"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.132334 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-utilities\") pod \"certified-operators-gs6mc\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") " pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.132417 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-catalog-content\") pod \"certified-operators-gs6mc\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") " pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.132569 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.632551772 +0000 UTC m=+123.403042046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.167635 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8vrv\" (UniqueName: \"kubernetes.io/projected/9e9aba28-961e-4643-92d8-d718748862c6-kube-api-access-q8vrv\") pod \"certified-operators-gs6mc\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") " pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.226550 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.232802 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.232992 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.732962674 +0000 UTC m=+123.503452948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.233150 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-catalog-content\") pod \"community-operators-c5dng\" (UID: \"edf1ad5e-15fa-4885-be31-4124514570a1\") " pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.233315 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rflnr\" (UniqueName: \"kubernetes.io/projected/edf1ad5e-15fa-4885-be31-4124514570a1-kube-api-access-rflnr\") pod \"community-operators-c5dng\" (UID: \"edf1ad5e-15fa-4885-be31-4124514570a1\") " pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.233359 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.233595 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-utilities\") pod \"community-operators-c5dng\" (UID: \"edf1ad5e-15fa-4885-be31-4124514570a1\") " pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.233658 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.733643373 +0000 UTC m=+123.504133647 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.280656 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hgkwp"] Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.289728 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.295664 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgkwp"] Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.319064 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-h2kxl" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.334497 5125 ???:1] "http: TLS handshake error from 192.168.126.11:34764: no serving certificate available for the kubelet" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.339530 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.339663 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.839638706 +0000 UTC m=+123.610128980 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.339876 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-catalog-content\") pod \"community-operators-c5dng\" (UID: \"edf1ad5e-15fa-4885-be31-4124514570a1\") " pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.340392 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-catalog-content\") pod \"community-operators-c5dng\" (UID: \"edf1ad5e-15fa-4885-be31-4124514570a1\") " pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.340510 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rflnr\" (UniqueName: \"kubernetes.io/projected/edf1ad5e-15fa-4885-be31-4124514570a1-kube-api-access-rflnr\") pod \"community-operators-c5dng\" (UID: \"edf1ad5e-15fa-4885-be31-4124514570a1\") " pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.340533 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: 
\"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.342159 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.842149202 +0000 UTC m=+123.612639476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.342417 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-utilities\") pod \"community-operators-c5dng\" (UID: \"edf1ad5e-15fa-4885-be31-4124514570a1\") " pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.342714 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-utilities\") pod \"community-operators-c5dng\" (UID: \"edf1ad5e-15fa-4885-be31-4124514570a1\") " pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.387581 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rflnr\" (UniqueName: \"kubernetes.io/projected/edf1ad5e-15fa-4885-be31-4124514570a1-kube-api-access-rflnr\") pod \"community-operators-c5dng\" (UID: 
\"edf1ad5e-15fa-4885-be31-4124514570a1\") " pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.403060 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.444441 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.444939 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.944916589 +0000 UTC m=+123.715406863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.445517 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-utilities\") pod \"certified-operators-hgkwp\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.445659 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.445725 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-catalog-content\") pod \"certified-operators-hgkwp\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.445831 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrsv\" (UniqueName: 
\"kubernetes.io/projected/e29013b4-d624-4a56-804d-c5bf83a0db40-kube-api-access-9nrsv\") pod \"certified-operators-hgkwp\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.446291 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:06.946255065 +0000 UTC m=+123.716745349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.474568 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mspl5"] Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.489080 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.505411 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mspl5"] Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.555753 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.556119 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrsv\" (UniqueName: \"kubernetes.io/projected/e29013b4-d624-4a56-804d-c5bf83a0db40-kube-api-access-9nrsv\") pod \"certified-operators-hgkwp\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.556237 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-utilities\") pod \"certified-operators-hgkwp\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.556275 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-catalog-content\") pod \"certified-operators-hgkwp\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.558715 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-catalog-content\") pod \"certified-operators-hgkwp\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.558821 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.058789613 +0000 UTC m=+123.829279887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.559761 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-utilities\") pod \"certified-operators-hgkwp\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.607931 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrsv\" (UniqueName: \"kubernetes.io/projected/e29013b4-d624-4a56-804d-c5bf83a0db40-kube-api-access-9nrsv\") pod \"certified-operators-hgkwp\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.618874 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.626566 5125 patch_prober.go:28] interesting pod/downloads-747b44746d-t8fbs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.626988 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-t8fbs" podUID="c46131b3-44f8-4a83-a357-31ca0197d1be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.642515 5125 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kr9dh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:06 crc kubenswrapper[5125]: [-]has-synced failed: reason withheld Dec 08 19:31:06 crc kubenswrapper[5125]: [+]process-running ok Dec 08 19:31:06 crc kubenswrapper[5125]: healthz check failed Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.642597 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" podUID="3d5b91de-c016-4a44-aab6-910f036d51ae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.646327 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gs6mc"] Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.661660 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-catalog-content\") pod \"community-operators-mspl5\" (UID: \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.661737 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-utilities\") pod \"community-operators-mspl5\" (UID: \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.661786 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.661816 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcg5j\" (UniqueName: \"kubernetes.io/projected/d85490a5-7e2e-41c2-8a79-fdfbe3767877-kube-api-access-dcg5j\") pod \"community-operators-mspl5\" (UID: \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.662134 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.162117275 +0000 UTC m=+123.932607549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.764962 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.765404 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dcg5j\" (UniqueName: \"kubernetes.io/projected/d85490a5-7e2e-41c2-8a79-fdfbe3767877-kube-api-access-dcg5j\") pod \"community-operators-mspl5\" (UID: \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.765723 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-catalog-content\") pod \"community-operators-mspl5\" (UID: \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.765876 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-utilities\") pod \"community-operators-mspl5\" (UID: \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " 
pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.766810 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.266784522 +0000 UTC m=+124.037274796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.777730 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-catalog-content\") pod \"community-operators-mspl5\" (UID: \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.778702 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-utilities\") pod \"community-operators-mspl5\" (UID: \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.795029 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcg5j\" (UniqueName: \"kubernetes.io/projected/d85490a5-7e2e-41c2-8a79-fdfbe3767877-kube-api-access-dcg5j\") pod \"community-operators-mspl5\" (UID: 
\"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.856088 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.866978 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.867516 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.367503563 +0000 UTC m=+124.137993837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.968427 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.968805 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.468751279 +0000 UTC m=+124.239241553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:06 crc kubenswrapper[5125]: I1208 19:31:06.969351 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:06 crc kubenswrapper[5125]: E1208 19:31:06.969693 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.469674954 +0000 UTC m=+124.240165228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.003155 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c5dng"] Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.045333 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.071523 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.071846 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.571814763 +0000 UTC m=+124.342305037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.072008 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.072416 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.572408489 +0000 UTC m=+124.342898763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.097395 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgkwp"] Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.174096 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5lb9\" (UniqueName: \"kubernetes.io/projected/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-kube-api-access-q5lb9\") pod \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.174431 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-config-volume\") pod \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.174547 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-secret-volume\") pod \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\" (UID: \"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9\") " Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.174814 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") 
pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.175265 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.675245776 +0000 UTC m=+124.445736050 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.177138 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-config-volume" (OuterVolumeSpecName: "config-volume") pod "f078a28d-3d9d-41a2-b283-7c1f76ebbfc9" (UID: "f078a28d-3d9d-41a2-b283-7c1f76ebbfc9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.184498 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f078a28d-3d9d-41a2-b283-7c1f76ebbfc9" (UID: "f078a28d-3d9d-41a2-b283-7c1f76ebbfc9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.194078 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-kube-api-access-q5lb9" (OuterVolumeSpecName: "kube-api-access-q5lb9") pod "f078a28d-3d9d-41a2-b283-7c1f76ebbfc9" (UID: "f078a28d-3d9d-41a2-b283-7c1f76ebbfc9"). InnerVolumeSpecName "kube-api-access-q5lb9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.262898 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.263408 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f078a28d-3d9d-41a2-b283-7c1f76ebbfc9" containerName="collect-profiles" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.263424 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="f078a28d-3d9d-41a2-b283-7c1f76ebbfc9" containerName="collect-profiles" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.263516 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="f078a28d-3d9d-41a2-b283-7c1f76ebbfc9" containerName="collect-profiles" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.276120 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.276481 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" 
failed. No retries permitted until 2025-12-08 19:31:07.776464892 +0000 UTC m=+124.546955166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.276521 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5lb9\" (UniqueName: \"kubernetes.io/projected/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-kube-api-access-q5lb9\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.276535 5125 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.276545 5125 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f078a28d-3d9d-41a2-b283-7c1f76ebbfc9-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:07 crc kubenswrapper[5125]: W1208 19:31:07.292443 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd85490a5_7e2e_41c2_8a79_fdfbe3767877.slice/crio-f5f6c3de587bcbaffbe3b613e6ce70268b35fcf07f32f162be4e066ba58054a7 WatchSource:0}: Error finding container f5f6c3de587bcbaffbe3b613e6ce70268b35fcf07f32f162be4e066ba58054a7: Status 404 returned error can't find the container with id f5f6c3de587bcbaffbe3b613e6ce70268b35fcf07f32f162be4e066ba58054a7 Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.377140 5125 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.377446 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.877412139 +0000 UTC m=+124.647902413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.378106 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.378568 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.87855235 +0000 UTC m=+124.649042624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.404892 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.404931 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mspl5"] Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.405048 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.412154 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.412327 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.479100 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.479201 5125 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.979179999 +0000 UTC m=+124.749670273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.479408 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.479875 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:07.979866767 +0000 UTC m=+124.750357041 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.580330 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.580848 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.080819074 +0000 UTC m=+124.851309348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.581230 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"7a439f4a-1b17-4e9d-a90c-c9278ed75bae\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.581363 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"7a439f4a-1b17-4e9d-a90c-c9278ed75bae\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.581765 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.582118 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 19:31:08.082104789 +0000 UTC m=+124.852595063 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.625917 5125 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kr9dh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:07 crc kubenswrapper[5125]: [-]has-synced failed: reason withheld Dec 08 19:31:07 crc kubenswrapper[5125]: [+]process-running ok Dec 08 19:31:07 crc kubenswrapper[5125]: healthz check failed Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.625988 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" podUID="3d5b91de-c016-4a44-aab6-910f036d51ae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.637434 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" event={"ID":"f078a28d-3d9d-41a2-b283-7c1f76ebbfc9","Type":"ContainerDied","Data":"63e85d506157742700fdf95e4223aec2234978285760745f4ce5d8ba8fd3fb4f"} Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.637489 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63e85d506157742700fdf95e4223aec2234978285760745f4ce5d8ba8fd3fb4f" Dec 08 19:31:07 crc kubenswrapper[5125]: 
I1208 19:31:07.637533 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420370-b586h" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.644478 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" event={"ID":"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1","Type":"ContainerStarted","Data":"5a0fb989d6ea8f1a7e4e8aba17bd2ae6c884a73f99b2a7fec4c8a41e1eb02166"} Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.651050 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mspl5" event={"ID":"d85490a5-7e2e-41c2-8a79-fdfbe3767877","Type":"ContainerStarted","Data":"f5f6c3de587bcbaffbe3b613e6ce70268b35fcf07f32f162be4e066ba58054a7"} Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.666637 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs6mc" event={"ID":"9e9aba28-961e-4643-92d8-d718748862c6","Type":"ContainerStarted","Data":"43c498fba30fa51637dd7805a839eac4cad54e53e1b9bf6142bf6496e135824b"} Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.669918 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgkwp" event={"ID":"e29013b4-d624-4a56-804d-c5bf83a0db40","Type":"ContainerStarted","Data":"2fac44e32fafd8e36e0f913fa2e04e05ce7e2d4a76298d4523c35a2b96214661"} Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.675491 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5dng" event={"ID":"edf1ad5e-15fa-4885-be31-4124514570a1","Type":"ContainerStarted","Data":"95c0843581cabb481674723fc11704b09c4375695b040b97e6fd01d6c109619a"} Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.675736 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" 
podUID="7d3d93c9-073e-4463-ad22-0dc846df2d84" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9" gracePeriod=30 Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.683560 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.683800 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.183769206 +0000 UTC m=+124.954259490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.684311 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.684354 5125 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"7a439f4a-1b17-4e9d-a90c-c9278ed75bae\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.684425 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"7a439f4a-1b17-4e9d-a90c-c9278ed75bae\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.684502 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"7a439f4a-1b17-4e9d-a90c-c9278ed75bae\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.684749 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.184732251 +0000 UTC m=+124.955222525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.712268 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"7a439f4a-1b17-4e9d-a90c-c9278ed75bae\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.726854 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.785915 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.786443 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.286384528 +0000 UTC m=+125.056874802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.787118 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.788840 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.288820473 +0000 UTC m=+125.059310747 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.890006 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.892240 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.392205696 +0000 UTC m=+125.162695970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.892415 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:07 crc kubenswrapper[5125]: E1208 19:31:07.892898 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.392872404 +0000 UTC m=+125.163362678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:07 crc kubenswrapper[5125]: I1208 19:31:07.935165 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 19:31:07 crc kubenswrapper[5125]: W1208 19:31:07.945816 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7a439f4a_1b17_4e9d_a90c_c9278ed75bae.slice/crio-b0094cf7f14d83478f1355ea4d8da695c70f73fd1bfdb60598a6a989edf7faec WatchSource:0}: Error finding container b0094cf7f14d83478f1355ea4d8da695c70f73fd1bfdb60598a6a989edf7faec: Status 404 returned error can't find the container with id b0094cf7f14d83478f1355ea4d8da695c70f73fd1bfdb60598a6a989edf7faec Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:07.994569 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:07.994705 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.494686345 +0000 UTC m=+125.265176619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:07.994871 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:07.995208 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.495200788 +0000 UTC m=+125.265691062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.068857 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cnqn9"] Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.074860 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnqn9" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.081629 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnqn9"] Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.086824 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.096170 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:08.096593 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.596574357 +0000 UTC m=+125.367064631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.198312 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9mss\" (UniqueName: \"kubernetes.io/projected/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-kube-api-access-b9mss\") pod \"redhat-marketplace-cnqn9\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") " pod="openshift-marketplace/redhat-marketplace-cnqn9" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.198573 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-catalog-content\") pod \"redhat-marketplace-cnqn9\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") " pod="openshift-marketplace/redhat-marketplace-cnqn9" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.198693 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:08.199024 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 19:31:08.699008585 +0000 UTC m=+125.469498859 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.199521 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-utilities\") pod \"redhat-marketplace-cnqn9\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") " pod="openshift-marketplace/redhat-marketplace-cnqn9" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.275750 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.280153 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-v5nx6" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.300684 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:08.300774 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.800751553 +0000 UTC m=+125.571241827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.300848 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9mss\" (UniqueName: \"kubernetes.io/projected/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-kube-api-access-b9mss\") pod \"redhat-marketplace-cnqn9\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") " pod="openshift-marketplace/redhat-marketplace-cnqn9" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.300967 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-catalog-content\") pod \"redhat-marketplace-cnqn9\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") " pod="openshift-marketplace/redhat-marketplace-cnqn9" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.301044 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.301421 5125 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-utilities\") pod \"redhat-marketplace-cnqn9\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") " pod="openshift-marketplace/redhat-marketplace-cnqn9" Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:08.301676 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.801658957 +0000 UTC m=+125.572149231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.301683 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-catalog-content\") pod \"redhat-marketplace-cnqn9\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") " pod="openshift-marketplace/redhat-marketplace-cnqn9" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.301781 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-utilities\") pod \"redhat-marketplace-cnqn9\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") " pod="openshift-marketplace/redhat-marketplace-cnqn9" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.336954 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-b9mss\" (UniqueName: \"kubernetes.io/projected/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-kube-api-access-b9mss\") pod \"redhat-marketplace-cnqn9\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") " pod="openshift-marketplace/redhat-marketplace-cnqn9" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.397538 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnqn9" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.403261 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:08.404896 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:08.904865036 +0000 UTC m=+125.675355310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.492663 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.492717 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-cdw7h" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.494330 5125 patch_prober.go:28] interesting pod/console-64d44f6ddf-cdw7h container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.494393 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-cdw7h" podUID="92837ccf-1e39-495e-bbcb-d3eaafd95d15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.499399 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hr84x"] Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.506378 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:08.506799 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.006786549 +0000 UTC m=+125.777276823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.607750 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:08.608127 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.107997234 +0000 UTC m=+125.878487508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.614958 5125 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-kr9dh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 19:31:08 crc kubenswrapper[5125]: [-]has-synced failed: reason withheld Dec 08 19:31:08 crc kubenswrapper[5125]: [+]process-running ok Dec 08 19:31:08 crc kubenswrapper[5125]: healthz check failed Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.615043 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh" podUID="3d5b91de-c016-4a44-aab6-910f036d51ae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.683109 5125 generic.go:358] "Generic (PLEG): container finished" podID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" containerID="8c20860a44b04cc509f74d62b5a7d6b88ea7fe00c2eddf8cfc763d0532be54cc" exitCode=0 Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.686000 5125 generic.go:358] "Generic (PLEG): container finished" podID="9e9aba28-961e-4643-92d8-d718748862c6" containerID="6a2d566268cd4f2fc1723697db7cf9bbca185afd1cfc85428bcdc1ac2768e1e6" exitCode=0 Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.687260 5125 generic.go:358] "Generic (PLEG): container finished" podID="e29013b4-d624-4a56-804d-c5bf83a0db40" 
containerID="2adb90be8b3ca0d06e4d770c00339a23aad32939bc17c696741aa21d4cfd3245" exitCode=0 Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.689261 5125 generic.go:358] "Generic (PLEG): container finished" podID="edf1ad5e-15fa-4885-be31-4124514570a1" containerID="e11b6a4653389b10833c7862eee1a4e830b4e2b2b8da840604b4bb87d2963c37" exitCode=0 Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.711453 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:08.711774 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.211760937 +0000 UTC m=+125.982251211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.794083 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hr84x"] Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.794130 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mspl5" event={"ID":"d85490a5-7e2e-41c2-8a79-fdfbe3767877","Type":"ContainerDied","Data":"8c20860a44b04cc509f74d62b5a7d6b88ea7fe00c2eddf8cfc763d0532be54cc"} Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.794179 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"7a439f4a-1b17-4e9d-a90c-c9278ed75bae","Type":"ContainerStarted","Data":"b0094cf7f14d83478f1355ea4d8da695c70f73fd1bfdb60598a6a989edf7faec"} Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.794199 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs6mc" event={"ID":"9e9aba28-961e-4643-92d8-d718748862c6","Type":"ContainerDied","Data":"6a2d566268cd4f2fc1723697db7cf9bbca185afd1cfc85428bcdc1ac2768e1e6"} Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.794215 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgkwp" event={"ID":"e29013b4-d624-4a56-804d-c5bf83a0db40","Type":"ContainerDied","Data":"2adb90be8b3ca0d06e4d770c00339a23aad32939bc17c696741aa21d4cfd3245"} Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.794232 5125 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5dng" event={"ID":"edf1ad5e-15fa-4885-be31-4124514570a1","Type":"ContainerDied","Data":"e11b6a4653389b10833c7862eee1a4e830b4e2b2b8da840604b4bb87d2963c37"} Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.794380 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hr84x" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.814051 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:08.815127 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.315104778 +0000 UTC m=+126.085595052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.815547 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:08.816601 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.316585118 +0000 UTC m=+126.087075492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.849938 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnqn9"] Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.917171 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.917401 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-catalog-content\") pod \"redhat-marketplace-hr84x\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " pod="openshift-marketplace/redhat-marketplace-hr84x" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.917431 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-utilities\") pod \"redhat-marketplace-hr84x\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " pod="openshift-marketplace/redhat-marketplace-hr84x" Dec 08 19:31:08 crc kubenswrapper[5125]: I1208 19:31:08.917574 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xq5d4\" (UniqueName: \"kubernetes.io/projected/d657b632-26f0-4a12-8012-69b9adcdfb4d-kube-api-access-xq5d4\") pod \"redhat-marketplace-hr84x\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " pod="openshift-marketplace/redhat-marketplace-hr84x" Dec 08 19:31:08 crc kubenswrapper[5125]: E1208 19:31:08.917689 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.417673589 +0000 UTC m=+126.188163863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.018730 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xq5d4\" (UniqueName: \"kubernetes.io/projected/d657b632-26f0-4a12-8012-69b9adcdfb4d-kube-api-access-xq5d4\") pod \"redhat-marketplace-hr84x\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " pod="openshift-marketplace/redhat-marketplace-hr84x" Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.019086 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-catalog-content\") pod \"redhat-marketplace-hr84x\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " pod="openshift-marketplace/redhat-marketplace-hr84x" Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.019123 5125 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-utilities\") pod \"redhat-marketplace-hr84x\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " pod="openshift-marketplace/redhat-marketplace-hr84x" Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.019194 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.019526 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-catalog-content\") pod \"redhat-marketplace-hr84x\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " pod="openshift-marketplace/redhat-marketplace-hr84x" Dec 08 19:31:09 crc kubenswrapper[5125]: E1208 19:31:09.019584 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.519568612 +0000 UTC m=+126.290058876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.020088 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-utilities\") pod \"redhat-marketplace-hr84x\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " pod="openshift-marketplace/redhat-marketplace-hr84x" Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.053792 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq5d4\" (UniqueName: \"kubernetes.io/projected/d657b632-26f0-4a12-8012-69b9adcdfb4d-kube-api-access-xq5d4\") pod \"redhat-marketplace-hr84x\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " pod="openshift-marketplace/redhat-marketplace-hr84x" Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.074163 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fgxfn"] Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.077912 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fgxfn" Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.085249 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.088004 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fgxfn"] Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.120557 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:09 crc kubenswrapper[5125]: E1208 19:31:09.120804 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.620775466 +0000 UTC m=+126.391265750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.120865 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-catalog-content\") pod \"redhat-operators-fgxfn\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") " pod="openshift-marketplace/redhat-operators-fgxfn" Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.121042 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-utilities\") pod \"redhat-operators-fgxfn\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") " pod="openshift-marketplace/redhat-operators-fgxfn" Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.121074 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjd9b\" (UniqueName: \"kubernetes.io/projected/84e9ab89-5847-44a9-b4d5-11fd35eea65f-kube-api-access-pjd9b\") pod \"redhat-operators-fgxfn\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") " pod="openshift-marketplace/redhat-operators-fgxfn" Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.133856 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hr84x"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.222002 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-catalog-content\") pod \"redhat-operators-fgxfn\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") " pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.222143 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-utilities\") pod \"redhat-operators-fgxfn\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") " pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.222170 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pjd9b\" (UniqueName: \"kubernetes.io/projected/84e9ab89-5847-44a9-b4d5-11fd35eea65f-kube-api-access-pjd9b\") pod \"redhat-operators-fgxfn\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") " pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.222277 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.222503 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-catalog-content\") pod \"redhat-operators-fgxfn\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") " pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.222620 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-utilities\") pod \"redhat-operators-fgxfn\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") " pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:31:09 crc kubenswrapper[5125]: E1208 19:31:09.222748 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.722730161 +0000 UTC m=+126.493220435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.240917 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjd9b\" (UniqueName: \"kubernetes.io/projected/84e9ab89-5847-44a9-b4d5-11fd35eea65f-kube-api-access-pjd9b\") pod \"redhat-operators-fgxfn\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") " pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.323885 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:09 crc kubenswrapper[5125]: E1208 19:31:09.324246 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.824229373 +0000 UTC m=+126.594719647 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.415343 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hr84x"]
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.425360 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:09 crc kubenswrapper[5125]: E1208 19:31:09.425815 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:09.925795097 +0000 UTC m=+126.696285371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.448186 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.470961 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-692gr"]
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.480596 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.514838 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-692gr"]
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.525948 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.526195 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-catalog-content\") pod \"redhat-operators-692gr\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.526307 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-utilities\") pod \"redhat-operators-692gr\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.526339 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nznl\" (UniqueName: \"kubernetes.io/projected/7f0e14e4-e9cb-4056-a6bb-320825a7a069-kube-api-access-7nznl\") pod \"redhat-operators-692gr\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:09 crc kubenswrapper[5125]: E1208 19:31:09.536837 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.036815694 +0000 UTC m=+126.807305968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.636827 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.637224 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7nznl\" (UniqueName: \"kubernetes.io/projected/7f0e14e4-e9cb-4056-a6bb-320825a7a069-kube-api-access-7nznl\") pod \"redhat-operators-692gr\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.637319 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-catalog-content\") pod \"redhat-operators-692gr\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.637364 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.637438 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-utilities\") pod \"redhat-operators-692gr\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.637789 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.638192 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-utilities\") pod \"redhat-operators-692gr\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.638259 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-catalog-content\") pod \"redhat-operators-692gr\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:09 crc kubenswrapper[5125]: E1208 19:31:09.638414 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.138398429 +0000 UTC m=+126.908888703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.647555 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-kr9dh"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.672810 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nznl\" (UniqueName: \"kubernetes.io/projected/7f0e14e4-e9cb-4056-a6bb-320825a7a069-kube-api-access-7nznl\") pod \"redhat-operators-692gr\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.738255 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:09 crc kubenswrapper[5125]: E1208 19:31:09.738383 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.23835012 +0000 UTC m=+127.008840394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.738480 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:09 crc kubenswrapper[5125]: E1208 19:31:09.740180 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.240170068 +0000 UTC m=+127.010660342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.747695 5125 generic.go:358] "Generic (PLEG): container finished" podID="7a439f4a-1b17-4e9d-a90c-c9278ed75bae" containerID="5f829c3622621813247814694c06b4cf21348a49eace4deb27eda9f14b918873" exitCode=0
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.747901 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"7a439f4a-1b17-4e9d-a90c-c9278ed75bae","Type":"ContainerDied","Data":"5f829c3622621813247814694c06b4cf21348a49eace4deb27eda9f14b918873"}
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.773775 5125 generic.go:358] "Generic (PLEG): container finished" podID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" containerID="a038b9e03bb343183e4fd6c6b8e031a8733e2f8501df0156b42cc0f9857009f0" exitCode=0
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.785827 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr84x" event={"ID":"d657b632-26f0-4a12-8012-69b9adcdfb4d","Type":"ContainerStarted","Data":"6c5b8b8064299a07352793c68b4cbbfa557d937cd66056c7ab6bc45f7c41e6ca"}
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.785863 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnqn9" event={"ID":"250d3433-c9c9-4cc2-b0ff-fae4f22615b3","Type":"ContainerDied","Data":"a038b9e03bb343183e4fd6c6b8e031a8733e2f8501df0156b42cc0f9857009f0"}
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.785893 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnqn9" event={"ID":"250d3433-c9c9-4cc2-b0ff-fae4f22615b3","Type":"ContainerStarted","Data":"92510f5548b2ab221a44f7b9e35d68d49b55b0c07e3f0b66cd53f16972b28bc3"}
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.809866 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.839724 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:09 crc kubenswrapper[5125]: E1208 19:31:09.840651 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.340635853 +0000 UTC m=+127.111126127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:09 crc kubenswrapper[5125]: I1208 19:31:09.942137 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:09 crc kubenswrapper[5125]: E1208 19:31:09.942426 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.442414723 +0000 UTC m=+127.212904997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.038668 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fgxfn"]
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.045457 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:10 crc kubenswrapper[5125]: E1208 19:31:10.045623 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.545582489 +0000 UTC m=+127.316072763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.045845 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:10 crc kubenswrapper[5125]: E1208 19:31:10.046195 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.546184386 +0000 UTC m=+127.316674660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5125]: W1208 19:31:10.088897 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84e9ab89_5847_44a9_b4d5_11fd35eea65f.slice/crio-b1656d9dc3295ca8035fe6afec300e02e8f23b638bf7eacf6c8fde8fb97f78b3 WatchSource:0}: Error finding container b1656d9dc3295ca8035fe6afec300e02e8f23b638bf7eacf6c8fde8fb97f78b3: Status 404 returned error can't find the container with id b1656d9dc3295ca8035fe6afec300e02e8f23b638bf7eacf6c8fde8fb97f78b3
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.149255 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:10 crc kubenswrapper[5125]: E1208 19:31:10.149719 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.649703062 +0000 UTC m=+127.420193336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.251287 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:10 crc kubenswrapper[5125]: E1208 19:31:10.251684 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.751658897 +0000 UTC m=+127.522149171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.357861 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:10 crc kubenswrapper[5125]: E1208 19:31:10.358117 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.85803851 +0000 UTC m=+127.628528784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.358508 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:10 crc kubenswrapper[5125]: E1208 19:31:10.359040 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.859023026 +0000 UTC m=+127.629513300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.387932 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.399416 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.400963 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"]
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.401514 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.406453 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.460451 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.460629 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 19:31:10 crc kubenswrapper[5125]: E1208 19:31:10.460659 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:10.960636212 +0000 UTC m=+127.731126486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.460728 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.468720 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-692gr"]
Dec 08 19:31:10 crc kubenswrapper[5125]: W1208 19:31:10.475469 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f0e14e4_e9cb_4056_a6bb_320825a7a069.slice/crio-a95f99fa612a0aa5e5f3012c9739484f6c76b46d7604b76de1c0f0399369630e WatchSource:0}: Error finding container a95f99fa612a0aa5e5f3012c9739484f6c76b46d7604b76de1c0f0399369630e: Status 404 returned error can't find the container with id a95f99fa612a0aa5e5f3012c9739484f6c76b46d7604b76de1c0f0399369630e
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.563481 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.563525 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.563547 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 19:31:10 crc kubenswrapper[5125]: E1208 19:31:10.564159 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.064147458 +0000 UTC m=+127.834637732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.564353 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.597403 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8\") " pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.666497 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 19:31:10 crc kubenswrapper[5125]: E1208 19:31:10.666965 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.166949925 +0000 UTC m=+127.937440199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.692525 5125 patch_prober.go:28] interesting pod/downloads-747b44746d-t8fbs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.692582 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-t8fbs" podUID="c46131b3-44f8-4a83-a357-31ca0197d1be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.725299 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.769631 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:10 crc kubenswrapper[5125]: E1208 19:31:10.769904 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.269891566 +0000 UTC m=+128.040381830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-hgxtj" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.783942 5125 generic.go:358] "Generic (PLEG): container finished" podID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" containerID="d755ce65a9a4a84839788ccc020b80d1a2cb94429fe1e45b3f1891c5730c0cb4" exitCode=0
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.784100 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgxfn" event={"ID":"84e9ab89-5847-44a9-b4d5-11fd35eea65f","Type":"ContainerDied","Data":"d755ce65a9a4a84839788ccc020b80d1a2cb94429fe1e45b3f1891c5730c0cb4"}
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.784126 5125
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgxfn" event={"ID":"84e9ab89-5847-44a9-b4d5-11fd35eea65f","Type":"ContainerStarted","Data":"b1656d9dc3295ca8035fe6afec300e02e8f23b638bf7eacf6c8fde8fb97f78b3"} Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.788196 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" event={"ID":"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1","Type":"ContainerStarted","Data":"f09bc0f24355d6bd6e9aa8aac2c8f97b8a679a50bccbe90d0f11254d8021015c"} Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.796634 5125 generic.go:358] "Generic (PLEG): container finished" podID="d657b632-26f0-4a12-8012-69b9adcdfb4d" containerID="0e92ab16351184b0c5799a81b8430bc93500cab1e65205ad9a630a42113c38aa" exitCode=0 Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.796790 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr84x" event={"ID":"d657b632-26f0-4a12-8012-69b9adcdfb4d","Type":"ContainerDied","Data":"0e92ab16351184b0c5799a81b8430bc93500cab1e65205ad9a630a42113c38aa"} Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.799919 5125 generic.go:358] "Generic (PLEG): container finished" podID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" containerID="384d2a91ada797587ea0f803ee515614431b5d2ea043bf40416ad323b80e544a" exitCode=0 Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.802699 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-692gr" event={"ID":"7f0e14e4-e9cb-4056-a6bb-320825a7a069","Type":"ContainerDied","Data":"384d2a91ada797587ea0f803ee515614431b5d2ea043bf40416ad323b80e544a"} Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.802737 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-692gr" 
event={"ID":"7f0e14e4-e9cb-4056-a6bb-320825a7a069","Type":"ContainerStarted","Data":"a95f99fa612a0aa5e5f3012c9739484f6c76b46d7604b76de1c0f0399369630e"} Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.825631 5125 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.870650 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:10 crc kubenswrapper[5125]: E1208 19:31:10.871669 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 19:31:11.371642154 +0000 UTC m=+128.142132428 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.886837 5125 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-08T19:31:10.825664772Z","UUID":"6431f67b-7b9d-4b47-8a7d-7296f236750c","Handler":null,"Name":"","Endpoint":""} Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.923208 5125 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.923257 5125 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.973517 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.978197 5125 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 08 19:31:10 crc kubenswrapper[5125]: I1208 19:31:10.978245 5125 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.014712 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-hgxtj\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.060879 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.082738 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.091410 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: 
"9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.091427 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.184351 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kubelet-dir\") pod \"7a439f4a-1b17-4e9d-a90c-c9278ed75bae\" (UID: \"7a439f4a-1b17-4e9d-a90c-c9278ed75bae\") " Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.184513 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7a439f4a-1b17-4e9d-a90c-c9278ed75bae" (UID: "7a439f4a-1b17-4e9d-a90c-c9278ed75bae"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.184724 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kube-api-access\") pod \"7a439f4a-1b17-4e9d-a90c-c9278ed75bae\" (UID: \"7a439f4a-1b17-4e9d-a90c-c9278ed75bae\") " Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.185430 5125 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.203581 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7a439f4a-1b17-4e9d-a90c-c9278ed75bae" (UID: "7a439f4a-1b17-4e9d-a90c-c9278ed75bae"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.210462 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.219890 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.286967 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a439f4a-1b17-4e9d-a90c-c9278ed75bae-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.486323 5125 ???:1] "http: TLS handshake error from 192.168.126.11:34774: no serving certificate available for the kubelet" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.507284 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hgxtj"] Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.783137 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.822918 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8","Type":"ContainerStarted","Data":"5d66179467465c2504a7b2baa045507bf67956cc661b179dd9910c13eeada87e"} Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.822967 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8","Type":"ContainerStarted","Data":"65e787ec3a74c0e3abaa19b03f27a7401307c27ecec5525aa020b21990cb0dc0"} Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.824050 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" event={"ID":"51fe67ff-4e90-4add-8447-58edc3e3d117","Type":"ContainerStarted","Data":"b1273f14d623d5b64bf0c546305c9a4caac9b2d8f44108c924b2ca90e85c7ee1"} Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 
19:31:11.826064 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.826073 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"7a439f4a-1b17-4e9d-a90c-c9278ed75bae","Type":"ContainerDied","Data":"b0094cf7f14d83478f1355ea4d8da695c70f73fd1bfdb60598a6a989edf7faec"} Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.826166 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0094cf7f14d83478f1355ea4d8da695c70f73fd1bfdb60598a6a989edf7faec" Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.832205 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" event={"ID":"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1","Type":"ContainerStarted","Data":"c2e3118c044c1e42e840cfe649d49d1fc6fb8febdc44af52a7c89f5ba6098049"} Dec 08 19:31:11 crc kubenswrapper[5125]: I1208 19:31:11.835251 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=1.835240879 podStartE2EDuration="1.835240879s" podCreationTimestamp="2025-12-08 19:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:11.834278063 +0000 UTC m=+128.604768337" watchObservedRunningTime="2025-12-08 19:31:11.835240879 +0000 UTC m=+128.605731153" Dec 08 19:31:12 crc kubenswrapper[5125]: I1208 19:31:12.846427 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" event={"ID":"c3ab5a3f-4be2-44ad-9bb9-e7b1d4d99de1","Type":"ContainerStarted","Data":"1bb02c8f18863e3129c7de03214f5f5d3571c8cfd03777492a6d4ce5817bd293"} Dec 08 19:31:12 crc kubenswrapper[5125]: I1208 19:31:12.854694 5125 generic.go:358] 
"Generic (PLEG): container finished" podID="4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8" containerID="5d66179467465c2504a7b2baa045507bf67956cc661b179dd9910c13eeada87e" exitCode=0 Dec 08 19:31:12 crc kubenswrapper[5125]: I1208 19:31:12.854740 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8","Type":"ContainerDied","Data":"5d66179467465c2504a7b2baa045507bf67956cc661b179dd9910c13eeada87e"} Dec 08 19:31:12 crc kubenswrapper[5125]: I1208 19:31:12.861415 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" event={"ID":"51fe67ff-4e90-4add-8447-58edc3e3d117","Type":"ContainerStarted","Data":"7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532"} Dec 08 19:31:12 crc kubenswrapper[5125]: I1208 19:31:12.861569 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:31:12 crc kubenswrapper[5125]: I1208 19:31:12.876772 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-jrtpt" podStartSLOduration=16.876748123 podStartE2EDuration="16.876748123s" podCreationTimestamp="2025-12-08 19:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:12.869511488 +0000 UTC m=+129.640001782" watchObservedRunningTime="2025-12-08 19:31:12.876748123 +0000 UTC m=+129.647238397" Dec 08 19:31:12 crc kubenswrapper[5125]: I1208 19:31:12.906892 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" podStartSLOduration=109.90687263 podStartE2EDuration="1m49.90687263s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:12.905873263 +0000 UTC m=+129.676363557" watchObservedRunningTime="2025-12-08 19:31:12.90687263 +0000 UTC m=+129.677362904" Dec 08 19:31:14 crc kubenswrapper[5125]: E1208 19:31:14.156384 5125 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:14 crc kubenswrapper[5125]: E1208 19:31:14.161164 5125 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:14 crc kubenswrapper[5125]: E1208 19:31:14.166314 5125 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:14 crc kubenswrapper[5125]: E1208 19:31:14.166454 5125 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" podUID="7d3d93c9-073e-4463-ad22-0dc846df2d84" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 19:31:14 crc kubenswrapper[5125]: I1208 19:31:14.801113 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:14 crc kubenswrapper[5125]: I1208 19:31:14.879653 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8","Type":"ContainerDied","Data":"65e787ec3a74c0e3abaa19b03f27a7401307c27ecec5525aa020b21990cb0dc0"} Dec 08 19:31:14 crc kubenswrapper[5125]: I1208 19:31:14.879678 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 19:31:14 crc kubenswrapper[5125]: I1208 19:31:14.879699 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e787ec3a74c0e3abaa19b03f27a7401307c27ecec5525aa020b21990cb0dc0" Dec 08 19:31:14 crc kubenswrapper[5125]: I1208 19:31:14.966171 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kube-api-access\") pod \"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8\" (UID: \"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8\") " Dec 08 19:31:14 crc kubenswrapper[5125]: I1208 19:31:14.966534 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kubelet-dir\") pod \"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8\" (UID: \"4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8\") " Dec 08 19:31:14 crc kubenswrapper[5125]: I1208 19:31:14.966776 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8" (UID: "4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:31:14 crc kubenswrapper[5125]: I1208 19:31:14.973883 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8" (UID: "4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:15 crc kubenswrapper[5125]: I1208 19:31:15.068792 5125 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:15 crc kubenswrapper[5125]: I1208 19:31:15.068833 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:15 crc kubenswrapper[5125]: I1208 19:31:15.599158 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:31:15 crc kubenswrapper[5125]: I1208 19:31:15.634402 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bhnwz" Dec 08 19:31:16 crc kubenswrapper[5125]: I1208 19:31:16.631246 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-t8fbs" Dec 08 19:31:16 crc kubenswrapper[5125]: I1208 19:31:16.996044 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:31:18 crc kubenswrapper[5125]: I1208 19:31:18.492927 5125 patch_prober.go:28] interesting pod/console-64d44f6ddf-cdw7h container/console namespace/openshift-console: Startup probe status=failure 
output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 08 19:31:18 crc kubenswrapper[5125]: I1208 19:31:18.492996 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-cdw7h" podUID="92837ccf-1e39-495e-bbcb-d3eaafd95d15" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 08 19:31:21 crc kubenswrapper[5125]: I1208 19:31:21.752482 5125 ???:1] "http: TLS handshake error from 192.168.126.11:48140: no serving certificate available for the kubelet" Dec 08 19:31:24 crc kubenswrapper[5125]: E1208 19:31:24.154777 5125 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:24 crc kubenswrapper[5125]: E1208 19:31:24.156392 5125 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:24 crc kubenswrapper[5125]: E1208 19:31:24.157437 5125 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 19:31:24 crc kubenswrapper[5125]: E1208 19:31:24.157479 5125 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" podUID="7d3d93c9-073e-4463-ad22-0dc846df2d84" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.268213 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.616263 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.616353 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.616389 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.618839 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 19:31:24 crc 
kubenswrapper[5125]: I1208 19:31:24.619016 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.619489 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.629194 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.635412 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.641027 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.717988 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.718653 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.720261 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.727476 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.734157 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a677937-278d-4989-b196-40d5daba436d-metrics-certs\") pod \"network-metrics-daemon-7lwbz\" (UID: \"9a677937-278d-4989-b196-40d5daba436d\") " pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.840008 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.898031 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.934554 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.935308 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.952658 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Dec 08 19:31:24 crc kubenswrapper[5125]: I1208 19:31:24.961669 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7lwbz"
Dec 08 19:31:28 crc kubenswrapper[5125]: I1208 19:31:28.497080 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:31:28 crc kubenswrapper[5125]: I1208 19:31:28.500700 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-cdw7h"
Dec 08 19:31:29 crc kubenswrapper[5125]: W1208 19:31:29.634929 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-22db56c2f89430c84826ca02fe6051603e52520c070791d0efb59c05829a9066 WatchSource:0}: Error finding container 22db56c2f89430c84826ca02fe6051603e52520c070791d0efb59c05829a9066: Status 404 returned error can't find the container with id 22db56c2f89430c84826ca02fe6051603e52520c070791d0efb59c05829a9066
Dec 08 19:31:29 crc kubenswrapper[5125]: I1208 19:31:29.648209 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7lwbz"]
Dec 08 19:31:29 crc kubenswrapper[5125]: W1208 19:31:29.657792 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-40db34f006f468cf35ccb0a928207d8ae8c03ba847b125440c7255f8b1ac7673 WatchSource:0}: Error finding container 40db34f006f468cf35ccb0a928207d8ae8c03ba847b125440c7255f8b1ac7673: Status 404 returned error can't find the container with id 40db34f006f468cf35ccb0a928207d8ae8c03ba847b125440c7255f8b1ac7673
Dec 08 19:31:29 crc kubenswrapper[5125]: W1208 19:31:29.681587 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a677937_278d_4989_b196_40d5daba436d.slice/crio-90f6b103661c838053ba88ac8207580b568d3c6ab4c7a06a2e3765f561a26e81 WatchSource:0}: Error finding container 90f6b103661c838053ba88ac8207580b568d3c6ab4c7a06a2e3765f561a26e81: Status 404 returned error can't find the container with id 90f6b103661c838053ba88ac8207580b568d3c6ab4c7a06a2e3765f561a26e81
Dec 08 19:31:29 crc kubenswrapper[5125]: I1208 19:31:29.979078 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"22db56c2f89430c84826ca02fe6051603e52520c070791d0efb59c05829a9066"}
Dec 08 19:31:29 crc kubenswrapper[5125]: I1208 19:31:29.982024 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"40db34f006f468cf35ccb0a928207d8ae8c03ba847b125440c7255f8b1ac7673"}
Dec 08 19:31:29 crc kubenswrapper[5125]: I1208 19:31:29.984485 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mspl5" event={"ID":"d85490a5-7e2e-41c2-8a79-fdfbe3767877","Type":"ContainerStarted","Data":"6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d"}
Dec 08 19:31:29 crc kubenswrapper[5125]: I1208 19:31:29.986474 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs6mc" event={"ID":"9e9aba28-961e-4643-92d8-d718748862c6","Type":"ContainerStarted","Data":"924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf"}
Dec 08 19:31:29 crc kubenswrapper[5125]: I1208 19:31:29.988180 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" event={"ID":"9a677937-278d-4989-b196-40d5daba436d","Type":"ContainerStarted","Data":"90f6b103661c838053ba88ac8207580b568d3c6ab4c7a06a2e3765f561a26e81"}
Dec 08 19:31:29 crc kubenswrapper[5125]: I1208 19:31:29.989381 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"95ea1e370b6e46f02c090e384c96eb31e8eb7bed078f9787adc920fb8c81b913"}
Dec 08 19:31:29 crc kubenswrapper[5125]: I1208 19:31:29.992473 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5dng" event={"ID":"edf1ad5e-15fa-4885-be31-4124514570a1","Type":"ContainerStarted","Data":"4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0"}
Dec 08 19:31:29 crc kubenswrapper[5125]: I1208 19:31:29.995187 5125 generic.go:358] "Generic (PLEG): container finished" podID="d657b632-26f0-4a12-8012-69b9adcdfb4d" containerID="e718023362befb3d2a6f322a96806656d93cfaba43d1dc3534a638706b2a7ca7" exitCode=0
Dec 08 19:31:29 crc kubenswrapper[5125]: I1208 19:31:29.995253 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr84x" event={"ID":"d657b632-26f0-4a12-8012-69b9adcdfb4d","Type":"ContainerDied","Data":"e718023362befb3d2a6f322a96806656d93cfaba43d1dc3534a638706b2a7ca7"}
Dec 08 19:31:29 crc kubenswrapper[5125]: I1208 19:31:29.998153 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-692gr" event={"ID":"7f0e14e4-e9cb-4056-a6bb-320825a7a069","Type":"ContainerStarted","Data":"5ea6a05bd5769663fd159e6c1bb044daf2eb85ed7544ddad6f5817224125cb9d"}
Dec 08 19:31:30 crc kubenswrapper[5125]: I1208 19:31:30.009951 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnqn9" event={"ID":"250d3433-c9c9-4cc2-b0ff-fae4f22615b3","Type":"ContainerStarted","Data":"440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.019827 5125 generic.go:358] "Generic (PLEG): container finished" podID="9e9aba28-961e-4643-92d8-d718748862c6" containerID="924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf" exitCode=0
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.020012 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs6mc" event={"ID":"9e9aba28-961e-4643-92d8-d718748862c6","Type":"ContainerDied","Data":"924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.023657 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" event={"ID":"9a677937-278d-4989-b196-40d5daba436d","Type":"ContainerStarted","Data":"b653c083a600b5dd6b4413b387b455c69b353cd50d2767d97e423f52a3cf9490"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.023709 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7lwbz" event={"ID":"9a677937-278d-4989-b196-40d5daba436d","Type":"ContainerStarted","Data":"8c54a5bfd7b8e029779090d10509eb77616976fab7d41e27a644fee5559b733c"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.035274 5125 generic.go:358] "Generic (PLEG): container finished" podID="e29013b4-d624-4a56-804d-c5bf83a0db40" containerID="7069a8af77aa4c34a7312dbad524361c858b0400ad0be8b1fb5beae0983614fe" exitCode=0
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.035656 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgkwp" event={"ID":"e29013b4-d624-4a56-804d-c5bf83a0db40","Type":"ContainerDied","Data":"7069a8af77aa4c34a7312dbad524361c858b0400ad0be8b1fb5beae0983614fe"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.037588 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"d9769e1c4e971a91b184b9570312c5baecec1ccb2409a953a15d6ee6aa0cdf0e"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.037892 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.039782 5125 generic.go:358] "Generic (PLEG): container finished" podID="edf1ad5e-15fa-4885-be31-4124514570a1" containerID="4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0" exitCode=0
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.040139 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5dng" event={"ID":"edf1ad5e-15fa-4885-be31-4124514570a1","Type":"ContainerDied","Data":"4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.042892 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr84x" event={"ID":"d657b632-26f0-4a12-8012-69b9adcdfb4d","Type":"ContainerStarted","Data":"d31595c4b2f3720890d6fe04de11652bfd185eac174766712aca3de5037239b6"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.047069 5125 generic.go:358] "Generic (PLEG): container finished" podID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" containerID="5ea6a05bd5769663fd159e6c1bb044daf2eb85ed7544ddad6f5817224125cb9d" exitCode=0
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.047261 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-692gr" event={"ID":"7f0e14e4-e9cb-4056-a6bb-320825a7a069","Type":"ContainerDied","Data":"5ea6a05bd5769663fd159e6c1bb044daf2eb85ed7544ddad6f5817224125cb9d"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.053920 5125 generic.go:358] "Generic (PLEG): container finished" podID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" containerID="440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926" exitCode=0
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.054002 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnqn9" event={"ID":"250d3433-c9c9-4cc2-b0ff-fae4f22615b3","Type":"ContainerDied","Data":"440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.058434 5125 generic.go:358] "Generic (PLEG): container finished" podID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" containerID="b820d5bf6cad23411b3f27e588ffd5e06f01d2058473ecbd5324bf8c6447f307" exitCode=0
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.058528 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgxfn" event={"ID":"84e9ab89-5847-44a9-b4d5-11fd35eea65f","Type":"ContainerDied","Data":"b820d5bf6cad23411b3f27e588ffd5e06f01d2058473ecbd5324bf8c6447f307"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.066382 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"3b1e3f49571385218383edcd8fe1a07f5b5ac0bbba9bab5968dfb8c575d90aa1"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.069229 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"293099d3ab52760087e1ac3877aefa1cd0b6973eb9ef80cdbbb6964ad247a587"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.071626 5125 generic.go:358] "Generic (PLEG): container finished" podID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" containerID="6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d" exitCode=0
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.071701 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mspl5" event={"ID":"d85490a5-7e2e-41c2-8a79-fdfbe3767877","Type":"ContainerDied","Data":"6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d"}
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.099145 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7lwbz" podStartSLOduration=128.099128664 podStartE2EDuration="2m8.099128664s" podCreationTimestamp="2025-12-08 19:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:31.096637147 +0000 UTC m=+147.867127451" watchObservedRunningTime="2025-12-08 19:31:31.099128664 +0000 UTC m=+147.869618938"
Dec 08 19:31:31 crc kubenswrapper[5125]: I1208 19:31:31.115719 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hr84x" podStartSLOduration=4.714591728 podStartE2EDuration="23.115701081s" podCreationTimestamp="2025-12-08 19:31:08 +0000 UTC" firstStartedPulling="2025-12-08 19:31:10.797488456 +0000 UTC m=+127.567978730" lastFinishedPulling="2025-12-08 19:31:29.198597809 +0000 UTC m=+145.969088083" observedRunningTime="2025-12-08 19:31:31.11081385 +0000 UTC m=+147.881304144" watchObservedRunningTime="2025-12-08 19:31:31.115701081 +0000 UTC m=+147.886191345"
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.081552 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5dng" event={"ID":"edf1ad5e-15fa-4885-be31-4124514570a1","Type":"ContainerStarted","Data":"8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49"}
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.086655 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-692gr" event={"ID":"7f0e14e4-e9cb-4056-a6bb-320825a7a069","Type":"ContainerStarted","Data":"146d7b3f8a4beacbb9cbf12333032fde5cc05be086e8c0df72f7e18f5eed9831"}
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.088705 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnqn9" event={"ID":"250d3433-c9c9-4cc2-b0ff-fae4f22615b3","Type":"ContainerStarted","Data":"edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3"}
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.090909 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgxfn" event={"ID":"84e9ab89-5847-44a9-b4d5-11fd35eea65f","Type":"ContainerStarted","Data":"ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98"}
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.096130 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mspl5" event={"ID":"d85490a5-7e2e-41c2-8a79-fdfbe3767877","Type":"ContainerStarted","Data":"5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb"}
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.098642 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs6mc" event={"ID":"9e9aba28-961e-4643-92d8-d718748862c6","Type":"ContainerStarted","Data":"39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26"}
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.099179 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c5dng" podStartSLOduration=5.693824745 podStartE2EDuration="26.099168353s" podCreationTimestamp="2025-12-08 19:31:06 +0000 UTC" firstStartedPulling="2025-12-08 19:31:08.795717428 +0000 UTC m=+125.566207702" lastFinishedPulling="2025-12-08 19:31:29.201061016 +0000 UTC m=+145.971551310" observedRunningTime="2025-12-08 19:31:32.099141823 +0000 UTC m=+148.869632117" watchObservedRunningTime="2025-12-08 19:31:32.099168353 +0000 UTC m=+148.869658627"
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.103859 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgkwp" event={"ID":"e29013b4-d624-4a56-804d-c5bf83a0db40","Type":"ContainerStarted","Data":"28baee5da304f768bfef8d0e818147c013ea98643cdcc78d9689579cc4b143b9"}
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.118694 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gs6mc" podStartSLOduration=6.713029133 podStartE2EDuration="27.118673329s" podCreationTimestamp="2025-12-08 19:31:05 +0000 UTC" firstStartedPulling="2025-12-08 19:31:08.797466855 +0000 UTC m=+125.567957129" lastFinishedPulling="2025-12-08 19:31:29.203111061 +0000 UTC m=+145.973601325" observedRunningTime="2025-12-08 19:31:32.1153607 +0000 UTC m=+148.885850974" watchObservedRunningTime="2025-12-08 19:31:32.118673329 +0000 UTC m=+148.889163623"
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.165782 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cnqn9" podStartSLOduration=4.51584137 podStartE2EDuration="24.165764799s" podCreationTimestamp="2025-12-08 19:31:08 +0000 UTC" firstStartedPulling="2025-12-08 19:31:09.780014357 +0000 UTC m=+126.550504631" lastFinishedPulling="2025-12-08 19:31:29.429937796 +0000 UTC m=+146.200428060" observedRunningTime="2025-12-08 19:31:32.138441162 +0000 UTC m=+148.908931456" watchObservedRunningTime="2025-12-08 19:31:32.165764799 +0000 UTC m=+148.936255073"
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.167691 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fgxfn" podStartSLOduration=4.50171888 podStartE2EDuration="23.16767158s" podCreationTimestamp="2025-12-08 19:31:09 +0000 UTC" firstStartedPulling="2025-12-08 19:31:10.78531256 +0000 UTC m=+127.555802834" lastFinishedPulling="2025-12-08 19:31:29.45126526 +0000 UTC m=+146.221755534" observedRunningTime="2025-12-08 19:31:32.163833727 +0000 UTC m=+148.934324021" watchObservedRunningTime="2025-12-08 19:31:32.16767158 +0000 UTC m=+148.938161864"
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.188658 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-692gr" podStartSLOduration=4.787260215 podStartE2EDuration="23.188641075s" podCreationTimestamp="2025-12-08 19:31:09 +0000 UTC" firstStartedPulling="2025-12-08 19:31:10.800832886 +0000 UTC m=+127.571323160" lastFinishedPulling="2025-12-08 19:31:29.202213736 +0000 UTC m=+145.972704020" observedRunningTime="2025-12-08 19:31:32.185775628 +0000 UTC m=+148.956265922" watchObservedRunningTime="2025-12-08 19:31:32.188641075 +0000 UTC m=+148.959131349"
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.235085 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hgkwp" podStartSLOduration=5.576978386 podStartE2EDuration="26.235065927s" podCreationTimestamp="2025-12-08 19:31:06 +0000 UTC" firstStartedPulling="2025-12-08 19:31:08.795129092 +0000 UTC m=+125.565619366" lastFinishedPulling="2025-12-08 19:31:29.453216633 +0000 UTC m=+146.223706907" observedRunningTime="2025-12-08 19:31:32.232379225 +0000 UTC m=+149.002869519" watchObservedRunningTime="2025-12-08 19:31:32.235065927 +0000 UTC m=+149.005556201"
Dec 08 19:31:32 crc kubenswrapper[5125]: I1208 19:31:32.235215 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mspl5" podStartSLOduration=5.602144235 podStartE2EDuration="26.235208882s" podCreationTimestamp="2025-12-08 19:31:06 +0000 UTC" firstStartedPulling="2025-12-08 19:31:08.796341605 +0000 UTC m=+125.566831889" lastFinishedPulling="2025-12-08 19:31:29.429406222 +0000 UTC m=+146.199896536" observedRunningTime="2025-12-08 19:31:32.215368726 +0000 UTC m=+148.985859020" watchObservedRunningTime="2025-12-08 19:31:32.235208882 +0000 UTC m=+149.005699176"
Dec 08 19:31:33 crc kubenswrapper[5125]: I1208 19:31:33.880743 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj"
Dec 08 19:31:34 crc kubenswrapper[5125]: E1208 19:31:34.154364 5125 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 19:31:34 crc kubenswrapper[5125]: E1208 19:31:34.155852 5125 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 19:31:34 crc kubenswrapper[5125]: E1208 19:31:34.157416 5125 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 08 19:31:34 crc kubenswrapper[5125]: E1208 19:31:34.157472 5125 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" podUID="7d3d93c9-073e-4463-ad22-0dc846df2d84" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.227511 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gs6mc"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.228525 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-gs6mc"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.404323 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c5dng"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.404707 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-c5dng"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.554292 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c5dng"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.559154 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gs6mc"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.620537 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hgkwp"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.620582 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-hgkwp"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.632082 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-4jn6q"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.659564 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hgkwp"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.857458 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mspl5"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.858316 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-mspl5"
Dec 08 19:31:36 crc kubenswrapper[5125]: I1208 19:31:36.901706 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mspl5"
Dec 08 19:31:37 crc kubenswrapper[5125]: I1208 19:31:37.164246 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c5dng"
Dec 08 19:31:37 crc kubenswrapper[5125]: I1208 19:31:37.164600 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hgkwp"
Dec 08 19:31:37 crc kubenswrapper[5125]: I1208 19:31:37.173407 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gs6mc"
Dec 08 19:31:37 crc kubenswrapper[5125]: I1208 19:31:37.725764 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mspl5"
Dec 08 19:31:37 crc kubenswrapper[5125]: I1208 19:31:37.885413 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-x7zl6_7d3d93c9-073e-4463-ad22-0dc846df2d84/kube-multus-additional-cni-plugins/0.log"
Dec 08 19:31:37 crc kubenswrapper[5125]: I1208 19:31:37.885505 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6"
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.013487 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7d3d93c9-073e-4463-ad22-0dc846df2d84-ready\") pod \"7d3d93c9-073e-4463-ad22-0dc846df2d84\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") "
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.013555 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7d3d93c9-073e-4463-ad22-0dc846df2d84-cni-sysctl-allowlist\") pod \"7d3d93c9-073e-4463-ad22-0dc846df2d84\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") "
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.013739 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkls9\" (UniqueName: \"kubernetes.io/projected/7d3d93c9-073e-4463-ad22-0dc846df2d84-kube-api-access-mkls9\") pod \"7d3d93c9-073e-4463-ad22-0dc846df2d84\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") "
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.013765 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d3d93c9-073e-4463-ad22-0dc846df2d84-tuning-conf-dir\") pod \"7d3d93c9-073e-4463-ad22-0dc846df2d84\" (UID: \"7d3d93c9-073e-4463-ad22-0dc846df2d84\") "
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.014006 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d3d93c9-073e-4463-ad22-0dc846df2d84-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "7d3d93c9-073e-4463-ad22-0dc846df2d84" (UID: "7d3d93c9-073e-4463-ad22-0dc846df2d84"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.014011 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3d93c9-073e-4463-ad22-0dc846df2d84-ready" (OuterVolumeSpecName: "ready") pod "7d3d93c9-073e-4463-ad22-0dc846df2d84" (UID: "7d3d93c9-073e-4463-ad22-0dc846df2d84"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.014269 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d3d93c9-073e-4463-ad22-0dc846df2d84-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7d3d93c9-073e-4463-ad22-0dc846df2d84" (UID: "7d3d93c9-073e-4463-ad22-0dc846df2d84"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.021350 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3d93c9-073e-4463-ad22-0dc846df2d84-kube-api-access-mkls9" (OuterVolumeSpecName: "kube-api-access-mkls9") pod "7d3d93c9-073e-4463-ad22-0dc846df2d84" (UID: "7d3d93c9-073e-4463-ad22-0dc846df2d84"). InnerVolumeSpecName "kube-api-access-mkls9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.115484 5125 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d3d93c9-073e-4463-ad22-0dc846df2d84-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.115519 5125 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/7d3d93c9-073e-4463-ad22-0dc846df2d84-ready\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.115530 5125 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7d3d93c9-073e-4463-ad22-0dc846df2d84-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.115540 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mkls9\" (UniqueName: \"kubernetes.io/projected/7d3d93c9-073e-4463-ad22-0dc846df2d84-kube-api-access-mkls9\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.132152 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-x7zl6_7d3d93c9-073e-4463-ad22-0dc846df2d84/kube-multus-additional-cni-plugins/0.log"
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.132188 5125 generic.go:358] "Generic (PLEG): container finished" podID="7d3d93c9-073e-4463-ad22-0dc846df2d84" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9" exitCode=137
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.132343 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" event={"ID":"7d3d93c9-073e-4463-ad22-0dc846df2d84","Type":"ContainerDied","Data":"b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9"}
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.132416 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6" event={"ID":"7d3d93c9-073e-4463-ad22-0dc846df2d84","Type":"ContainerDied","Data":"673c95f41551e5a0fa3b7e72473a2f765688c82ff897f75ebe851f7d8726c49e"}
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.132438 5125 scope.go:117] "RemoveContainer" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9"
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.132542 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-x7zl6"
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.151723 5125 scope.go:117] "RemoveContainer" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9"
Dec 08 19:31:38 crc kubenswrapper[5125]: E1208 19:31:38.152159 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9\": container with ID starting with b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9 not found: ID does not exist" containerID="b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9"
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.152202 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9"} err="failed to get container status \"b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9\": rpc error: code = NotFound desc = could not find container \"b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9\": container with ID starting with b94019e1e3e8ae908735166d6af528e641dd7f24f090a51a6b8577a6548c2ff9 not found: ID does not exist"
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.160681 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-x7zl6"]
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.163783 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-x7zl6"]
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.398689 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cnqn9"
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.398733 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-cnqn9"
Dec 08 19:31:38 crc kubenswrapper[5125]: I1208 19:31:38.440859 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cnqn9"
Dec 08 19:31:39 crc kubenswrapper[5125]: I1208 19:31:39.135522 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-hr84x"
Dec 08 19:31:39 crc kubenswrapper[5125]: I1208 19:31:39.135764 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hr84x"
Dec 08 19:31:39 crc kubenswrapper[5125]: I1208 19:31:39.173739 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hr84x"
Dec 08 19:31:39 crc kubenswrapper[5125]: I1208 19:31:39.192346 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cnqn9"
Dec 08 19:31:39 crc kubenswrapper[5125]: I1208 19:31:39.448997 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:31:39 crc kubenswrapper[5125]: I1208 19:31:39.449276 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:31:39 crc kubenswrapper[5125]: I1208 19:31:39.484804 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:31:39 crc kubenswrapper[5125]: I1208 19:31:39.776031 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3d93c9-073e-4463-ad22-0dc846df2d84" path="/var/lib/kubelet/pods/7d3d93c9-073e-4463-ad22-0dc846df2d84/volumes"
Dec 08 19:31:39 crc kubenswrapper[5125]: I1208 19:31:39.811962 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:39 crc kubenswrapper[5125]: I1208 19:31:39.812005 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:39 crc kubenswrapper[5125]: I1208 19:31:39.857272 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.190108 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.193015 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hr84x"
Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.196103 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.225130 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgkwp"]
Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.225415 5125
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hgkwp" podUID="e29013b4-d624-4a56-804d-c5bf83a0db40" containerName="registry-server" containerID="cri-o://28baee5da304f768bfef8d0e818147c013ea98643cdcc78d9689579cc4b143b9" gracePeriod=2 Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.417926 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mspl5"] Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.418758 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mspl5" podUID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" containerName="registry-server" containerID="cri-o://5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb" gracePeriod=2 Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.760835 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.850049 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-catalog-content\") pod \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\" (UID: \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.850130 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-utilities\") pod \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\" (UID: \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.850195 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcg5j\" (UniqueName: 
\"kubernetes.io/projected/d85490a5-7e2e-41c2-8a79-fdfbe3767877-kube-api-access-dcg5j\") pod \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\" (UID: \"d85490a5-7e2e-41c2-8a79-fdfbe3767877\") " Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.851397 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-utilities" (OuterVolumeSpecName: "utilities") pod "d85490a5-7e2e-41c2-8a79-fdfbe3767877" (UID: "d85490a5-7e2e-41c2-8a79-fdfbe3767877"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.857130 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85490a5-7e2e-41c2-8a79-fdfbe3767877-kube-api-access-dcg5j" (OuterVolumeSpecName: "kube-api-access-dcg5j") pod "d85490a5-7e2e-41c2-8a79-fdfbe3767877" (UID: "d85490a5-7e2e-41c2-8a79-fdfbe3767877"). InnerVolumeSpecName "kube-api-access-dcg5j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.906511 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d85490a5-7e2e-41c2-8a79-fdfbe3767877" (UID: "d85490a5-7e2e-41c2-8a79-fdfbe3767877"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.951508 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.951545 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d85490a5-7e2e-41c2-8a79-fdfbe3767877-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:40 crc kubenswrapper[5125]: I1208 19:31:40.951554 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dcg5j\" (UniqueName: \"kubernetes.io/projected/d85490a5-7e2e-41c2-8a79-fdfbe3767877-kube-api-access-dcg5j\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.158342 5125 generic.go:358] "Generic (PLEG): container finished" podID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" containerID="5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb" exitCode=0 Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.158416 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mspl5" event={"ID":"d85490a5-7e2e-41c2-8a79-fdfbe3767877","Type":"ContainerDied","Data":"5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb"} Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.158452 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mspl5" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.158481 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mspl5" event={"ID":"d85490a5-7e2e-41c2-8a79-fdfbe3767877","Type":"ContainerDied","Data":"f5f6c3de587bcbaffbe3b613e6ce70268b35fcf07f32f162be4e066ba58054a7"} Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.158506 5125 scope.go:117] "RemoveContainer" containerID="5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.161888 5125 generic.go:358] "Generic (PLEG): container finished" podID="e29013b4-d624-4a56-804d-c5bf83a0db40" containerID="28baee5da304f768bfef8d0e818147c013ea98643cdcc78d9689579cc4b143b9" exitCode=0 Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.161988 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgkwp" event={"ID":"e29013b4-d624-4a56-804d-c5bf83a0db40","Type":"ContainerDied","Data":"28baee5da304f768bfef8d0e818147c013ea98643cdcc78d9689579cc4b143b9"} Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.178002 5125 scope.go:117] "RemoveContainer" containerID="6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.193162 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mspl5"] Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.195657 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mspl5"] Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.216596 5125 scope.go:117] "RemoveContainer" containerID="8c20860a44b04cc509f74d62b5a7d6b88ea7fe00c2eddf8cfc763d0532be54cc" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.240706 5125 scope.go:117] "RemoveContainer" 
containerID="5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb" Dec 08 19:31:41 crc kubenswrapper[5125]: E1208 19:31:41.241195 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb\": container with ID starting with 5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb not found: ID does not exist" containerID="5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.241234 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb"} err="failed to get container status \"5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb\": rpc error: code = NotFound desc = could not find container \"5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb\": container with ID starting with 5c065c1ff06bec2a5fe660810e2bee9befdd59a1f9167440b1ec7116bfd199fb not found: ID does not exist" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.241259 5125 scope.go:117] "RemoveContainer" containerID="6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d" Dec 08 19:31:41 crc kubenswrapper[5125]: E1208 19:31:41.241964 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d\": container with ID starting with 6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d not found: ID does not exist" containerID="6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.241990 5125 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d"} err="failed to get container status \"6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d\": rpc error: code = NotFound desc = could not find container \"6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d\": container with ID starting with 6c922c4dd34d4217d01179204f6bd3c649fb1fdaee2563c294f5d561fe44c84d not found: ID does not exist" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.242005 5125 scope.go:117] "RemoveContainer" containerID="8c20860a44b04cc509f74d62b5a7d6b88ea7fe00c2eddf8cfc763d0532be54cc" Dec 08 19:31:41 crc kubenswrapper[5125]: E1208 19:31:41.242214 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c20860a44b04cc509f74d62b5a7d6b88ea7fe00c2eddf8cfc763d0532be54cc\": container with ID starting with 8c20860a44b04cc509f74d62b5a7d6b88ea7fe00c2eddf8cfc763d0532be54cc not found: ID does not exist" containerID="8c20860a44b04cc509f74d62b5a7d6b88ea7fe00c2eddf8cfc763d0532be54cc" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.242240 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c20860a44b04cc509f74d62b5a7d6b88ea7fe00c2eddf8cfc763d0532be54cc"} err="failed to get container status \"8c20860a44b04cc509f74d62b5a7d6b88ea7fe00c2eddf8cfc763d0532be54cc\": rpc error: code = NotFound desc = could not find container \"8c20860a44b04cc509f74d62b5a7d6b88ea7fe00c2eddf8cfc763d0532be54cc\": container with ID starting with 8c20860a44b04cc509f74d62b5a7d6b88ea7fe00c2eddf8cfc763d0532be54cc not found: ID does not exist" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.482787 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.558300 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrsv\" (UniqueName: \"kubernetes.io/projected/e29013b4-d624-4a56-804d-c5bf83a0db40-kube-api-access-9nrsv\") pod \"e29013b4-d624-4a56-804d-c5bf83a0db40\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.558363 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-catalog-content\") pod \"e29013b4-d624-4a56-804d-c5bf83a0db40\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.558573 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-utilities\") pod \"e29013b4-d624-4a56-804d-c5bf83a0db40\" (UID: \"e29013b4-d624-4a56-804d-c5bf83a0db40\") " Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.559866 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-utilities" (OuterVolumeSpecName: "utilities") pod "e29013b4-d624-4a56-804d-c5bf83a0db40" (UID: "e29013b4-d624-4a56-804d-c5bf83a0db40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.563911 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29013b4-d624-4a56-804d-c5bf83a0db40-kube-api-access-9nrsv" (OuterVolumeSpecName: "kube-api-access-9nrsv") pod "e29013b4-d624-4a56-804d-c5bf83a0db40" (UID: "e29013b4-d624-4a56-804d-c5bf83a0db40"). InnerVolumeSpecName "kube-api-access-9nrsv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.588115 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e29013b4-d624-4a56-804d-c5bf83a0db40" (UID: "e29013b4-d624-4a56-804d-c5bf83a0db40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.660584 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9nrsv\" (UniqueName: \"kubernetes.io/projected/e29013b4-d624-4a56-804d-c5bf83a0db40-kube-api-access-9nrsv\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.660630 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.660639 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e29013b4-d624-4a56-804d-c5bf83a0db40-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:31:41 crc kubenswrapper[5125]: I1208 19:31:41.774210 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" path="/var/lib/kubelet/pods/d85490a5-7e2e-41c2-8a79-fdfbe3767877/volumes" Dec 08 19:31:42 crc kubenswrapper[5125]: I1208 19:31:42.171311 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgkwp" event={"ID":"e29013b4-d624-4a56-804d-c5bf83a0db40","Type":"ContainerDied","Data":"2fac44e32fafd8e36e0f913fa2e04e05ce7e2d4a76298d4523c35a2b96214661"} Dec 08 19:31:42 crc kubenswrapper[5125]: I1208 19:31:42.171594 5125 scope.go:117] "RemoveContainer" 
containerID="28baee5da304f768bfef8d0e818147c013ea98643cdcc78d9689579cc4b143b9" Dec 08 19:31:42 crc kubenswrapper[5125]: I1208 19:31:42.171853 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgkwp" Dec 08 19:31:42 crc kubenswrapper[5125]: I1208 19:31:42.192239 5125 scope.go:117] "RemoveContainer" containerID="7069a8af77aa4c34a7312dbad524361c858b0400ad0be8b1fb5beae0983614fe" Dec 08 19:31:42 crc kubenswrapper[5125]: I1208 19:31:42.192976 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgkwp"] Dec 08 19:31:42 crc kubenswrapper[5125]: I1208 19:31:42.197462 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hgkwp"] Dec 08 19:31:42 crc kubenswrapper[5125]: I1208 19:31:42.206751 5125 scope.go:117] "RemoveContainer" containerID="2adb90be8b3ca0d06e4d770c00339a23aad32939bc17c696741aa21d4cfd3245" Dec 08 19:31:42 crc kubenswrapper[5125]: I1208 19:31:42.263128 5125 ???:1] "http: TLS handshake error from 192.168.126.11:32946: no serving certificate available for the kubelet" Dec 08 19:31:42 crc kubenswrapper[5125]: I1208 19:31:42.619454 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hr84x"] Dec 08 19:31:42 crc kubenswrapper[5125]: I1208 19:31:42.818362 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-692gr"] Dec 08 19:31:42 crc kubenswrapper[5125]: I1208 19:31:42.818632 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-692gr" podUID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" containerName="registry-server" containerID="cri-o://146d7b3f8a4beacbb9cbf12333032fde5cc05be086e8c0df72f7e18f5eed9831" gracePeriod=2 Dec 08 19:31:43 crc kubenswrapper[5125]: I1208 19:31:43.178833 5125 generic.go:358] "Generic (PLEG): container finished" 
podID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" containerID="146d7b3f8a4beacbb9cbf12333032fde5cc05be086e8c0df72f7e18f5eed9831" exitCode=0 Dec 08 19:31:43 crc kubenswrapper[5125]: I1208 19:31:43.178906 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-692gr" event={"ID":"7f0e14e4-e9cb-4056-a6bb-320825a7a069","Type":"ContainerDied","Data":"146d7b3f8a4beacbb9cbf12333032fde5cc05be086e8c0df72f7e18f5eed9831"} Dec 08 19:31:43 crc kubenswrapper[5125]: I1208 19:31:43.179435 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hr84x" podUID="d657b632-26f0-4a12-8012-69b9adcdfb4d" containerName="registry-server" containerID="cri-o://d31595c4b2f3720890d6fe04de11652bfd185eac174766712aca3de5037239b6" gracePeriod=2 Dec 08 19:31:43 crc kubenswrapper[5125]: I1208 19:31:43.351497 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2wvch"] Dec 08 19:31:43 crc kubenswrapper[5125]: I1208 19:31:43.774070 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e29013b4-d624-4a56-804d-c5bf83a0db40" path="/var/lib/kubelet/pods/e29013b4-d624-4a56-804d-c5bf83a0db40/volumes" Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.185023 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-692gr" event={"ID":"7f0e14e4-e9cb-4056-a6bb-320825a7a069","Type":"ContainerDied","Data":"a95f99fa612a0aa5e5f3012c9739484f6c76b46d7604b76de1c0f0399369630e"} Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.185342 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a95f99fa612a0aa5e5f3012c9739484f6c76b46d7604b76de1c0f0399369630e" Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.186760 5125 generic.go:358] "Generic (PLEG): container finished" podID="d657b632-26f0-4a12-8012-69b9adcdfb4d" 
containerID="d31595c4b2f3720890d6fe04de11652bfd185eac174766712aca3de5037239b6" exitCode=0 Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.186792 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr84x" event={"ID":"d657b632-26f0-4a12-8012-69b9adcdfb4d","Type":"ContainerDied","Data":"d31595c4b2f3720890d6fe04de11652bfd185eac174766712aca3de5037239b6"} Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.235930 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-692gr" Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.239770 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hr84x" Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.293025 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-utilities\") pod \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.293323 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq5d4\" (UniqueName: \"kubernetes.io/projected/d657b632-26f0-4a12-8012-69b9adcdfb4d-kube-api-access-xq5d4\") pod \"d657b632-26f0-4a12-8012-69b9adcdfb4d\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.293508 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nznl\" (UniqueName: \"kubernetes.io/projected/7f0e14e4-e9cb-4056-a6bb-320825a7a069-kube-api-access-7nznl\") pod \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.293593 5125 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-catalog-content\") pod \"d657b632-26f0-4a12-8012-69b9adcdfb4d\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.293701 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-utilities\") pod \"d657b632-26f0-4a12-8012-69b9adcdfb4d\" (UID: \"d657b632-26f0-4a12-8012-69b9adcdfb4d\") " Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.293816 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-catalog-content\") pod \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\" (UID: \"7f0e14e4-e9cb-4056-a6bb-320825a7a069\") " Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.294284 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-utilities" (OuterVolumeSpecName: "utilities") pod "7f0e14e4-e9cb-4056-a6bb-320825a7a069" (UID: "7f0e14e4-e9cb-4056-a6bb-320825a7a069"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.295667 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-utilities" (OuterVolumeSpecName: "utilities") pod "d657b632-26f0-4a12-8012-69b9adcdfb4d" (UID: "d657b632-26f0-4a12-8012-69b9adcdfb4d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.300454 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0e14e4-e9cb-4056-a6bb-320825a7a069-kube-api-access-7nznl" (OuterVolumeSpecName: "kube-api-access-7nznl") pod "7f0e14e4-e9cb-4056-a6bb-320825a7a069" (UID: "7f0e14e4-e9cb-4056-a6bb-320825a7a069"). InnerVolumeSpecName "kube-api-access-7nznl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.300905 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d657b632-26f0-4a12-8012-69b9adcdfb4d-kube-api-access-xq5d4" (OuterVolumeSpecName: "kube-api-access-xq5d4") pod "d657b632-26f0-4a12-8012-69b9adcdfb4d" (UID: "d657b632-26f0-4a12-8012-69b9adcdfb4d"). InnerVolumeSpecName "kube-api-access-xq5d4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.311577 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d657b632-26f0-4a12-8012-69b9adcdfb4d" (UID: "d657b632-26f0-4a12-8012-69b9adcdfb4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.389713 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f0e14e4-e9cb-4056-a6bb-320825a7a069" (UID: "7f0e14e4-e9cb-4056-a6bb-320825a7a069"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.395328 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xq5d4\" (UniqueName: \"kubernetes.io/projected/d657b632-26f0-4a12-8012-69b9adcdfb4d-kube-api-access-xq5d4\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.395363 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7nznl\" (UniqueName: \"kubernetes.io/projected/7f0e14e4-e9cb-4056-a6bb-320825a7a069-kube-api-access-7nznl\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.395374 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.395384 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d657b632-26f0-4a12-8012-69b9adcdfb4d-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.395393 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:44 crc kubenswrapper[5125]: I1208 19:31:44.395401 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f0e14e4-e9cb-4056-a6bb-320825a7a069-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.193640 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hr84x" event={"ID":"d657b632-26f0-4a12-8012-69b9adcdfb4d","Type":"ContainerDied","Data":"6c5b8b8064299a07352793c68b4cbbfa557d937cd66056c7ab6bc45f7c41e6ca"}
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.193713 5125 scope.go:117] "RemoveContainer" containerID="d31595c4b2f3720890d6fe04de11652bfd185eac174766712aca3de5037239b6"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.193778 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-692gr"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.193711 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hr84x"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.211032 5125 scope.go:117] "RemoveContainer" containerID="e718023362befb3d2a6f322a96806656d93cfaba43d1dc3534a638706b2a7ca7"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.226359 5125 scope.go:117] "RemoveContainer" containerID="0e92ab16351184b0c5799a81b8430bc93500cab1e65205ad9a630a42113c38aa"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.227007 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hr84x"]
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.233256 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hr84x"]
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.236929 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-692gr"]
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.247358 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-692gr"]
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.388430 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.388970 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a439f4a-1b17-4e9d-a90c-c9278ed75bae" containerName="pruner"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.388982 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a439f4a-1b17-4e9d-a90c-c9278ed75bae" containerName="pruner"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.388995 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" containerName="extract-content"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389001 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" containerName="extract-content"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389010 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389015 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389025 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8" containerName="pruner"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389031 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8" containerName="pruner"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389038 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389044 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389053 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d657b632-26f0-4a12-8012-69b9adcdfb4d" containerName="extract-utilities"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389058 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="d657b632-26f0-4a12-8012-69b9adcdfb4d" containerName="extract-utilities"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389065 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" containerName="extract-utilities"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389070 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" containerName="extract-utilities"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389076 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e29013b4-d624-4a56-804d-c5bf83a0db40" containerName="extract-content"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389082 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29013b4-d624-4a56-804d-c5bf83a0db40" containerName="extract-content"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389090 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d657b632-26f0-4a12-8012-69b9adcdfb4d" containerName="extract-content"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389095 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="d657b632-26f0-4a12-8012-69b9adcdfb4d" containerName="extract-content"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389103 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d657b632-26f0-4a12-8012-69b9adcdfb4d" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389108 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="d657b632-26f0-4a12-8012-69b9adcdfb4d" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389115 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d3d93c9-073e-4463-ad22-0dc846df2d84" containerName="kube-multus-additional-cni-plugins"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389121 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3d93c9-073e-4463-ad22-0dc846df2d84" containerName="kube-multus-additional-cni-plugins"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389130 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e29013b4-d624-4a56-804d-c5bf83a0db40" containerName="extract-utilities"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389136 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29013b4-d624-4a56-804d-c5bf83a0db40" containerName="extract-utilities"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389149 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" containerName="extract-utilities"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389154 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" containerName="extract-utilities"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389163 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" containerName="extract-content"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389168 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" containerName="extract-content"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389178 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e29013b4-d624-4a56-804d-c5bf83a0db40" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389184 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29013b4-d624-4a56-804d-c5bf83a0db40" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389257 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="7a439f4a-1b17-4e9d-a90c-c9278ed75bae" containerName="pruner"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389264 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="d657b632-26f0-4a12-8012-69b9adcdfb4d" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389271 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="e29013b4-d624-4a56-804d-c5bf83a0db40" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389280 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="d85490a5-7e2e-41c2-8a79-fdfbe3767877" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389288 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" containerName="registry-server"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389297 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d3d93c9-073e-4463-ad22-0dc846df2d84" containerName="kube-multus-additional-cni-plugins"
Dec 08 19:31:45 crc kubenswrapper[5125]: I1208 19:31:45.389306 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="4a27c6c9-a5bc-4428-ab50-3c5c7547a6e8" containerName="pruner"
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.530694 5125 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.538332 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f0e14e4-e9cb-4056-a6bb-320825a7a069" path="/var/lib/kubelet/pods/7f0e14e4-e9cb-4056-a6bb-320825a7a069/volumes"
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.539289 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d657b632-26f0-4a12-8012-69b9adcdfb4d" path="/var/lib/kubelet/pods/d657b632-26f0-4a12-8012-69b9adcdfb4d/volumes"
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.539919 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.542816 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.549781 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.655596 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d52625b-a1ed-4f9b-a145-594738c8d662-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"8d52625b-a1ed-4f9b-a145-594738c8d662\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.655681 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d52625b-a1ed-4f9b-a145-594738c8d662-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"8d52625b-a1ed-4f9b-a145-594738c8d662\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.756634 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d52625b-a1ed-4f9b-a145-594738c8d662-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"8d52625b-a1ed-4f9b-a145-594738c8d662\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.756690 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d52625b-a1ed-4f9b-a145-594738c8d662-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"8d52625b-a1ed-4f9b-a145-594738c8d662\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.756790 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d52625b-a1ed-4f9b-a145-594738c8d662-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"8d52625b-a1ed-4f9b-a145-594738c8d662\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.779468 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d52625b-a1ed-4f9b-a145-594738c8d662-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"8d52625b-a1ed-4f9b-a145-594738c8d662\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 19:31:48 crc kubenswrapper[5125]: I1208 19:31:48.854624 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 19:31:49 crc kubenswrapper[5125]: I1208 19:31:49.314887 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Dec 08 19:31:49 crc kubenswrapper[5125]: W1208 19:31:49.318958 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod8d52625b_a1ed_4f9b_a145_594738c8d662.slice/crio-0eea978e9bf553edb4b8d337f699b841a57bf2e5babb0fd799cb20594469fb73 WatchSource:0}: Error finding container 0eea978e9bf553edb4b8d337f699b841a57bf2e5babb0fd799cb20594469fb73: Status 404 returned error can't find the container with id 0eea978e9bf553edb4b8d337f699b841a57bf2e5babb0fd799cb20594469fb73
Dec 08 19:31:50 crc kubenswrapper[5125]: I1208 19:31:50.220376 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"8d52625b-a1ed-4f9b-a145-594738c8d662","Type":"ContainerStarted","Data":"ff94138cc425624ba47994b857538ee9f7af579e9b3d205bdd7d289169459270"}
Dec 08 19:31:50 crc kubenswrapper[5125]: I1208 19:31:50.220742 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"8d52625b-a1ed-4f9b-a145-594738c8d662","Type":"ContainerStarted","Data":"0eea978e9bf553edb4b8d337f699b841a57bf2e5babb0fd799cb20594469fb73"}
Dec 08 19:31:50 crc kubenswrapper[5125]: I1208 19:31:50.236567 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=5.236547769 podStartE2EDuration="5.236547769s" podCreationTimestamp="2025-12-08 19:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:50.236042485 +0000 UTC m=+167.006532769" watchObservedRunningTime="2025-12-08 19:31:50.236547769 +0000 UTC m=+167.007038053"
Dec 08 19:31:51 crc kubenswrapper[5125]: I1208 19:31:51.226530 5125 generic.go:358] "Generic (PLEG): container finished" podID="8d52625b-a1ed-4f9b-a145-594738c8d662" containerID="ff94138cc425624ba47994b857538ee9f7af579e9b3d205bdd7d289169459270" exitCode=0
Dec 08 19:31:51 crc kubenswrapper[5125]: I1208 19:31:51.226644 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"8d52625b-a1ed-4f9b-a145-594738c8d662","Type":"ContainerDied","Data":"ff94138cc425624ba47994b857538ee9f7af579e9b3d205bdd7d289169459270"}
Dec 08 19:31:52 crc kubenswrapper[5125]: I1208 19:31:52.425527 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 19:31:52 crc kubenswrapper[5125]: I1208 19:31:52.501504 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d52625b-a1ed-4f9b-a145-594738c8d662-kube-api-access\") pod \"8d52625b-a1ed-4f9b-a145-594738c8d662\" (UID: \"8d52625b-a1ed-4f9b-a145-594738c8d662\") "
Dec 08 19:31:52 crc kubenswrapper[5125]: I1208 19:31:52.501578 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d52625b-a1ed-4f9b-a145-594738c8d662-kubelet-dir\") pod \"8d52625b-a1ed-4f9b-a145-594738c8d662\" (UID: \"8d52625b-a1ed-4f9b-a145-594738c8d662\") "
Dec 08 19:31:52 crc kubenswrapper[5125]: I1208 19:31:52.501863 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d52625b-a1ed-4f9b-a145-594738c8d662-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8d52625b-a1ed-4f9b-a145-594738c8d662" (UID: "8d52625b-a1ed-4f9b-a145-594738c8d662"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:31:52 crc kubenswrapper[5125]: I1208 19:31:52.508778 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d52625b-a1ed-4f9b-a145-594738c8d662-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8d52625b-a1ed-4f9b-a145-594738c8d662" (UID: "8d52625b-a1ed-4f9b-a145-594738c8d662"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:31:52 crc kubenswrapper[5125]: I1208 19:31:52.602700 5125 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d52625b-a1ed-4f9b-a145-594738c8d662-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:52 crc kubenswrapper[5125]: I1208 19:31:52.602747 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d52625b-a1ed-4f9b-a145-594738c8d662-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 19:31:53 crc kubenswrapper[5125]: I1208 19:31:53.185927 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 08 19:31:53 crc kubenswrapper[5125]: I1208 19:31:53.186485 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d52625b-a1ed-4f9b-a145-594738c8d662" containerName="pruner"
Dec 08 19:31:53 crc kubenswrapper[5125]: I1208 19:31:53.186503 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d52625b-a1ed-4f9b-a145-594738c8d662" containerName="pruner"
Dec 08 19:31:53 crc kubenswrapper[5125]: I1208 19:31:53.186587 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d52625b-a1ed-4f9b-a145-594738c8d662" containerName="pruner"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.725604 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208
19:31:54.725933 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"8d52625b-a1ed-4f9b-a145-594738c8d662","Type":"ContainerDied","Data":"0eea978e9bf553edb4b8d337f699b841a57bf2e5babb0fd799cb20594469fb73"}
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.725964 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0eea978e9bf553edb4b8d337f699b841a57bf2e5babb0fd799cb20594469fb73"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.725746 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.725868 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.829196 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.829253 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d84464a9-ebd2-4e20-8196-6d468034e0cc-kube-api-access\") pod \"installer-12-crc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.829528 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-var-lock\") pod \"installer-12-crc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.931083 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-var-lock\") pod \"installer-12-crc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.931174 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.931208 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d84464a9-ebd2-4e20-8196-6d468034e0cc-kube-api-access\") pod \"installer-12-crc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.931625 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-var-lock\") pod \"installer-12-crc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.931643 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-kubelet-dir\") pod \"installer-12-crc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:31:54 crc kubenswrapper[5125]: I1208 19:31:54.956247 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d84464a9-ebd2-4e20-8196-6d468034e0cc-kube-api-access\") pod \"installer-12-crc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") " pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:31:55 crc kubenswrapper[5125]: I1208 19:31:55.057002 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:31:55 crc kubenswrapper[5125]: I1208 19:31:55.278998 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"]
Dec 08 19:31:56 crc kubenswrapper[5125]: I1208 19:31:56.276740 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d84464a9-ebd2-4e20-8196-6d468034e0cc","Type":"ContainerStarted","Data":"d15fdef3f46634f7354199904baf3174702ad43607bb04d73e8819d54b4bc418"}
Dec 08 19:31:56 crc kubenswrapper[5125]: I1208 19:31:56.277040 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d84464a9-ebd2-4e20-8196-6d468034e0cc","Type":"ContainerStarted","Data":"98d11b3be6b7862a40f067690f7f75fe12e28d51209d4041466ed261ef9e3742"}
Dec 08 19:32:02 crc kubenswrapper[5125]: I1208 19:32:02.109058 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 19:32:02 crc kubenswrapper[5125]: I1208 19:32:02.132430 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=9.132406863 podStartE2EDuration="9.132406863s" podCreationTimestamp="2025-12-08 19:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:31:56.297509733 +0000 UTC m=+173.068000067" watchObservedRunningTime="2025-12-08 19:32:02.132406863 +0000 UTC m=+178.902897157"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.384967 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" podUID="cdb7a298-ac30-410b-9ab7-a060a428e88b" containerName="oauth-openshift" containerID="cri-o://18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499" gracePeriod=15
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.733411 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.769049 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"]
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.769673 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cdb7a298-ac30-410b-9ab7-a060a428e88b" containerName="oauth-openshift"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.769694 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb7a298-ac30-410b-9ab7-a060a428e88b" containerName="oauth-openshift"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.769833 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="cdb7a298-ac30-410b-9ab7-a060a428e88b" containerName="oauth-openshift"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.774670 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.786458 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"]
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.920855 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-provider-selection\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921249 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-login\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921286 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-idp-0-file-data\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921332 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-dir\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921388 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/host-path/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921570 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-service-ca\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921784 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-trusted-ca-bundle\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921817 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-session\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921844 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-cliconfig\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921862 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22br8\" (UniqueName: \"kubernetes.io/projected/cdb7a298-ac30-410b-9ab7-a060a428e88b-kube-api-access-22br8\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921883 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-serving-cert\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921922 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-ocp-branding-template\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921956 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-policies\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.921981 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-router-certs\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922023 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-error\") pod \"cdb7a298-ac30-410b-9ab7-a060a428e88b\" (UID: \"cdb7a298-ac30-410b-9ab7-a060a428e88b\") "
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922141 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-template-login\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922169 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922194 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922184 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922213 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d51e2531-2b5e-46c6-80d4-1f408538957f-audit-dir\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922249 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922270 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922290 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922361 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-audit-policies\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922380 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-session\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922399 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-template-error\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922417 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"
Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922433 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv4jp\" (UniqueName:
\"kubernetes.io/projected/d51e2531-2b5e-46c6-80d4-1f408538957f-kube-api-access-vv4jp\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922480 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922524 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922573 5125 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922584 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.922689 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: 
"v4-0-config-system-trusted-ca-bundle") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.923899 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.924313 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.927546 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.928018 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.928204 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.928383 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.928779 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb7a298-ac30-410b-9ab7-a060a428e88b-kube-api-access-22br8" (OuterVolumeSpecName: "kube-api-access-22br8") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "kube-api-access-22br8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.928830 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.929187 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.929245 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:08 crc kubenswrapper[5125]: I1208 19:32:08.929433 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "cdb7a298-ac30-410b-9ab7-a060a428e88b" (UID: "cdb7a298-ac30-410b-9ab7-a060a428e88b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.023463 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-audit-policies\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.023515 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-session\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.023710 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-template-error\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.023795 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.023822 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vv4jp\" (UniqueName: 
\"kubernetes.io/projected/d51e2531-2b5e-46c6-80d4-1f408538957f-kube-api-access-vv4jp\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.023908 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.023977 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024041 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-template-login\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024080 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " 
pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024121 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024149 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d51e2531-2b5e-46c6-80d4-1f408538957f-audit-dir\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024193 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024224 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024260 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024536 5125 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024684 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d51e2531-2b5e-46c6-80d4-1f408538957f-audit-dir\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024785 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024836 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024845 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-audit-policies\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024854 5125 
reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024871 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024887 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024909 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024924 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024939 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024954 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-22br8\" (UniqueName: 
\"kubernetes.io/projected/cdb7a298-ac30-410b-9ab7-a060a428e88b-kube-api-access-22br8\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024966 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.024979 5125 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cdb7a298-ac30-410b-9ab7-a060a428e88b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.025273 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.025734 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.026779 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-service-ca\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " 
pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.027507 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.027561 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.028283 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-session\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.028774 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-template-error\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.029528 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-router-certs\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.030269 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-user-template-login\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.030570 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.032952 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d51e2531-2b5e-46c6-80d4-1f408538957f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.040128 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv4jp\" (UniqueName: \"kubernetes.io/projected/d51e2531-2b5e-46c6-80d4-1f408538957f-kube-api-access-vv4jp\") pod \"oauth-openshift-6dcf56cb87-rv4n7\" (UID: \"d51e2531-2b5e-46c6-80d4-1f408538957f\") " 
pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.102138 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.347860 5125 generic.go:358] "Generic (PLEG): container finished" podID="cdb7a298-ac30-410b-9ab7-a060a428e88b" containerID="18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499" exitCode=0 Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.347936 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" event={"ID":"cdb7a298-ac30-410b-9ab7-a060a428e88b","Type":"ContainerDied","Data":"18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499"} Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.347962 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" event={"ID":"cdb7a298-ac30-410b-9ab7-a060a428e88b","Type":"ContainerDied","Data":"ecbace5f6958b3269162e07d5ed74ede4f32ab7a84e9902a45c2dbfbae19f17d"} Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.347977 5125 scope.go:117] "RemoveContainer" containerID="18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.348092 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2wvch" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.374888 5125 scope.go:117] "RemoveContainer" containerID="18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499" Dec 08 19:32:09 crc kubenswrapper[5125]: E1208 19:32:09.375408 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499\": container with ID starting with 18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499 not found: ID does not exist" containerID="18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.375708 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499"} err="failed to get container status \"18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499\": rpc error: code = NotFound desc = could not find container \"18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499\": container with ID starting with 18077ee2d09c52b4f773f68b9c88e0c9f6e8ae990b8d07b94a0e92eeb4b42499 not found: ID does not exist" Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.385251 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2wvch"] Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.389620 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2wvch"] Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.500869 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7"] Dec 08 19:32:09 crc kubenswrapper[5125]: W1208 19:32:09.508803 5125 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd51e2531_2b5e_46c6_80d4_1f408538957f.slice/crio-d0d63983eb91b0f0ca3b0f89c0522590907f2134a105adf7584f31f1458e6bc8 WatchSource:0}: Error finding container d0d63983eb91b0f0ca3b0f89c0522590907f2134a105adf7584f31f1458e6bc8: Status 404 returned error can't find the container with id d0d63983eb91b0f0ca3b0f89c0522590907f2134a105adf7584f31f1458e6bc8 Dec 08 19:32:09 crc kubenswrapper[5125]: I1208 19:32:09.775831 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb7a298-ac30-410b-9ab7-a060a428e88b" path="/var/lib/kubelet/pods/cdb7a298-ac30-410b-9ab7-a060a428e88b/volumes" Dec 08 19:32:10 crc kubenswrapper[5125]: I1208 19:32:10.356422 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" event={"ID":"d51e2531-2b5e-46c6-80d4-1f408538957f","Type":"ContainerStarted","Data":"5d10f9f98437220a7ca57107e0343431957059a129a6fc23c21d78997cc8477f"} Dec 08 19:32:10 crc kubenswrapper[5125]: I1208 19:32:10.356472 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" event={"ID":"d51e2531-2b5e-46c6-80d4-1f408538957f","Type":"ContainerStarted","Data":"d0d63983eb91b0f0ca3b0f89c0522590907f2134a105adf7584f31f1458e6bc8"} Dec 08 19:32:10 crc kubenswrapper[5125]: I1208 19:32:10.356794 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:10 crc kubenswrapper[5125]: I1208 19:32:10.378252 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" podStartSLOduration=27.378231688 podStartE2EDuration="27.378231688s" podCreationTimestamp="2025-12-08 19:31:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-08 19:32:10.376199202 +0000 UTC m=+187.146689486" watchObservedRunningTime="2025-12-08 19:32:10.378231688 +0000 UTC m=+187.148721962" Dec 08 19:32:10 crc kubenswrapper[5125]: I1208 19:32:10.532501 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6dcf56cb87-rv4n7" Dec 08 19:32:23 crc kubenswrapper[5125]: I1208 19:32:23.263317 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39088: no serving certificate available for the kubelet" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.588395 5125 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.589403 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb" gracePeriod=15 Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.589452 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12" gracePeriod=15 Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.589575 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0" gracePeriod=15 Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.589557 5125 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05" gracePeriod=15 Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.589591 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c" gracePeriod=15 Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.591930 5125 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592517 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592536 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592551 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592557 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592563 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592569 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 
19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592577 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592584 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592593 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592599 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592634 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592640 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592651 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592656 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592661 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc 
kubenswrapper[5125]: I1208 19:32:33.592667 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592672 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592678 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592753 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592761 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592768 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592775 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592786 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592794 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592802 5125 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592883 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592889 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.592980 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.593145 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.611263 5125 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.619909 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.625560 5125 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.648296 5125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.770735 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.770825 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.770920 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.770966 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.771009 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.771182 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.771276 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.771372 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.771426 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.771525 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.872315 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.872443 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.872714 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.872834 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.872861 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.872881 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.872921 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.872918 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.872943 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.872984 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.873043 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.873068 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.873089 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.873126 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.873127 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.873150 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.873190 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.873294 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: I1208 19:32:33.874346 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:33 crc kubenswrapper[5125]: 
I1208 19:32:33.874361 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:34 crc kubenswrapper[5125]: I1208 19:32:34.501849 5125 generic.go:358] "Generic (PLEG): container finished" podID="d84464a9-ebd2-4e20-8196-6d468034e0cc" containerID="d15fdef3f46634f7354199904baf3174702ad43607bb04d73e8819d54b4bc418" exitCode=0 Dec 08 19:32:34 crc kubenswrapper[5125]: I1208 19:32:34.501917 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d84464a9-ebd2-4e20-8196-6d468034e0cc","Type":"ContainerDied","Data":"d15fdef3f46634f7354199904baf3174702ad43607bb04d73e8819d54b4bc418"} Dec 08 19:32:34 crc kubenswrapper[5125]: I1208 19:32:34.503373 5125 status_manager.go:895] "Failed to get status for pod" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 08 19:32:34 crc kubenswrapper[5125]: I1208 19:32:34.505643 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 19:32:34 crc kubenswrapper[5125]: I1208 19:32:34.507444 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 19:32:34 crc kubenswrapper[5125]: I1208 19:32:34.508468 5125 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb" 
exitCode=0 Dec 08 19:32:34 crc kubenswrapper[5125]: I1208 19:32:34.508484 5125 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05" exitCode=0 Dec 08 19:32:34 crc kubenswrapper[5125]: I1208 19:32:34.508490 5125 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12" exitCode=0 Dec 08 19:32:34 crc kubenswrapper[5125]: I1208 19:32:34.508498 5125 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0" exitCode=2 Dec 08 19:32:34 crc kubenswrapper[5125]: I1208 19:32:34.508565 5125 scope.go:117] "RemoveContainer" containerID="346669eecef937e5745cefc16b2a292bb25eb93c0f83fb5cb68a7edbae4eb1af" Dec 08 19:32:34 crc kubenswrapper[5125]: E1208 19:32:34.597886 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:34Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:34Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:34Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:34Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 08 19:32:34 crc kubenswrapper[5125]: E1208 19:32:34.599093 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 08 19:32:34 crc kubenswrapper[5125]: E1208 19:32:34.599983 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 08 19:32:34 crc kubenswrapper[5125]: E1208 19:32:34.600993 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 08 
19:32:34 crc kubenswrapper[5125]: E1208 19:32:34.601541 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 08 19:32:34 crc kubenswrapper[5125]: E1208 19:32:34.601579 5125 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 19:32:35 crc kubenswrapper[5125]: I1208 19:32:35.867144 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.102835 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.104025 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.104775 5125 status_manager.go:895] "Failed to get status for pod" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.104830 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.105106 5125 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.105705 5125 status_manager.go:895] "Failed to get status for pod" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.106098 5125 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.180898 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181053 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181080 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181184 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-var-lock\") pod \"d84464a9-ebd2-4e20-8196-6d468034e0cc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") "
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181269 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181303 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-var-lock" (OuterVolumeSpecName: "var-lock") pod "d84464a9-ebd2-4e20-8196-6d468034e0cc" (UID: "d84464a9-ebd2-4e20-8196-6d468034e0cc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181359 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d84464a9-ebd2-4e20-8196-6d468034e0cc-kube-api-access\") pod \"d84464a9-ebd2-4e20-8196-6d468034e0cc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") "
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181534 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181594 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-kubelet-dir\") pod \"d84464a9-ebd2-4e20-8196-6d468034e0cc\" (UID: \"d84464a9-ebd2-4e20-8196-6d468034e0cc\") "
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181666 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181669 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181725 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.181735 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d84464a9-ebd2-4e20-8196-6d468034e0cc" (UID: "d84464a9-ebd2-4e20-8196-6d468034e0cc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.182299 5125 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.182333 5125 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.182352 5125 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.182369 5125 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.182385 5125 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d84464a9-ebd2-4e20-8196-6d468034e0cc-var-lock\") on node \"crc\" DevicePath \"\""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.182546 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.187779 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.192239 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84464a9-ebd2-4e20-8196-6d468034e0cc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d84464a9-ebd2-4e20-8196-6d468034e0cc" (UID: "d84464a9-ebd2-4e20-8196-6d468034e0cc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.283252 5125 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.283543 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d84464a9-ebd2-4e20-8196-6d468034e0cc-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.283558 5125 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.884203 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.887719 5125 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c" exitCode=0
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.887890 5125 scope.go:117] "RemoveContainer" containerID="367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.888049 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.890318 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"d84464a9-ebd2-4e20-8196-6d468034e0cc","Type":"ContainerDied","Data":"98d11b3be6b7862a40f067690f7f75fe12e28d51209d4041466ed261ef9e3742"}
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.890351 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d11b3be6b7862a40f067690f7f75fe12e28d51209d4041466ed261ef9e3742"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.890499 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.909964 5125 status_manager.go:895] "Failed to get status for pod" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.910542 5125 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.910973 5125 status_manager.go:895] "Failed to get status for pod" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.911418 5125 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.913593 5125 scope.go:117] "RemoveContainer" containerID="be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.926244 5125 scope.go:117] "RemoveContainer" containerID="6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.944379 5125 scope.go:117] "RemoveContainer" containerID="8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.958139 5125 scope.go:117] "RemoveContainer" containerID="a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c"
Dec 08 19:32:36 crc kubenswrapper[5125]: I1208 19:32:36.971934 5125 scope.go:117] "RemoveContainer" containerID="3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.033564 5125 scope.go:117] "RemoveContainer" containerID="367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.033950 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb\": container with ID starting with 367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb not found: ID does not exist" containerID="367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.033978 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb"} err="failed to get container status \"367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb\": rpc error: code = NotFound desc = could not find container \"367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb\": container with ID starting with 367e85a4fdaaf613020dc8e54f3690d4f81d5320b750fbfa1d704a7b7a9e71cb not found: ID does not exist"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.033996 5125 scope.go:117] "RemoveContainer" containerID="be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.034316 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\": container with ID starting with be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05 not found: ID does not exist" containerID="be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.034334 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05"} err="failed to get container status \"be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\": rpc error: code = NotFound desc = could not find container \"be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05\": container with ID starting with be7cc8d52376599fa6e20ccc45f43544f765f5d0ca901360045e14c3441a4c05 not found: ID does not exist"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.034345 5125 scope.go:117] "RemoveContainer" containerID="6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.034660 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\": container with ID starting with 6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12 not found: ID does not exist" containerID="6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.034680 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12"} err="failed to get container status \"6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\": rpc error: code = NotFound desc = could not find container \"6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12\": container with ID starting with 6d33cb163457c854b355765916b3c29d258a9b0db805a51c89bd221aba35fb12 not found: ID does not exist"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.034693 5125 scope.go:117] "RemoveContainer" containerID="8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.034989 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\": container with ID starting with 8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0 not found: ID does not exist" containerID="8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.035162 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0"} err="failed to get container status \"8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\": rpc error: code = NotFound desc = could not find container \"8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0\": container with ID starting with 8c37e3585615ba4ff1e0e7d348bf306b89181474b72aebe5290f9cf2a9c706d0 not found: ID does not exist"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.035308 5125 scope.go:117] "RemoveContainer" containerID="a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.035699 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\": container with ID starting with a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c not found: ID does not exist" containerID="a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.035736 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c"} err="failed to get container status \"a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\": rpc error: code = NotFound desc = could not find container \"a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c\": container with ID starting with a5e4699670d62181c1fafae8281271f7dd7e3a3694a21aa85a0431dc61994c3c not found: ID does not exist"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.035761 5125 scope.go:117] "RemoveContainer" containerID="3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.036111 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\": container with ID starting with 3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e not found: ID does not exist" containerID="3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.036214 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e"} err="failed to get container status \"3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\": rpc error: code = NotFound desc = could not find container \"3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e\": container with ID starting with 3cda31233ce6e3e5aed8d15ddb95d6b240aaa7d86c013a045413b454b2a6313e not found: ID does not exist"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.705171 5125 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.706555 5125 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.707233 5125 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.707915 5125 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.708277 5125 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.708325 5125 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.708717 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="200ms"
Dec 08 19:32:37 crc kubenswrapper[5125]: I1208 19:32:37.778296 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.838297 5125 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" volumeName="registry-storage"
Dec 08 19:32:37 crc kubenswrapper[5125]: E1208 19:32:37.910571 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="400ms"
Dec 08 19:32:38 crc kubenswrapper[5125]: E1208 19:32:38.311940 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="800ms"
Dec 08 19:32:38 crc kubenswrapper[5125]: E1208 19:32:38.650407 5125 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 19:32:38 crc kubenswrapper[5125]: I1208 19:32:38.651261 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 19:32:38 crc kubenswrapper[5125]: E1208 19:32:38.688099 5125 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f5461f71c195f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:32:38.686898527 +0000 UTC m=+215.457388811,LastTimestamp:2025-12-08 19:32:38.686898527 +0000 UTC m=+215.457388811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:32:38 crc kubenswrapper[5125]: I1208 19:32:38.905235 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"700642cc61133a050036c55863f460466dc5c61dc7a144b58fa62ade664acf19"}
Dec 08 19:32:39 crc kubenswrapper[5125]: E1208 19:32:39.113061 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="1.6s"
Dec 08 19:32:39 crc kubenswrapper[5125]: I1208 19:32:39.911914 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834"}
Dec 08 19:32:39 crc kubenswrapper[5125]: I1208 19:32:39.912086 5125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 19:32:39 crc kubenswrapper[5125]: I1208 19:32:39.912510 5125 status_manager.go:895] "Failed to get status for pod" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:39 crc kubenswrapper[5125]: E1208 19:32:39.912585 5125 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 19:32:40 crc kubenswrapper[5125]: E1208 19:32:40.717549 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="3.2s"
Dec 08 19:32:40 crc kubenswrapper[5125]: I1208 19:32:40.919450 5125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 19:32:40 crc kubenswrapper[5125]: E1208 19:32:40.920111 5125 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 19:32:41 crc kubenswrapper[5125]: E1208 19:32:41.444344 5125 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.174:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f5461f71c195f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 19:32:38.686898527 +0000 UTC m=+215.457388811,LastTimestamp:2025-12-08 19:32:38.686898527 +0000 UTC m=+215.457388811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 19:32:43 crc kubenswrapper[5125]: I1208 19:32:43.772012 5125 status_manager.go:895] "Failed to get status for pod" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:43 crc kubenswrapper[5125]: E1208 19:32:43.918067 5125 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused" interval="6.4s"
Dec 08 19:32:44 crc kubenswrapper[5125]: E1208 19:32:44.618659 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:44Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:44Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:44Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T19:32:44Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:44 crc kubenswrapper[5125]: E1208 19:32:44.619829 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:44 crc kubenswrapper[5125]: E1208 19:32:44.620499 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:44 crc kubenswrapper[5125]: E1208 19:32:44.620824 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:44 crc kubenswrapper[5125]: E1208 19:32:44.621250 5125 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:44 crc kubenswrapper[5125]: E1208 19:32:44.621575 5125 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 08 19:32:46 crc kubenswrapper[5125]: I1208 19:32:46.766575 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:32:46 crc kubenswrapper[5125]: I1208 19:32:46.769033 5125 status_manager.go:895] "Failed to get status for pod" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:46 crc kubenswrapper[5125]: I1208 19:32:46.798433 5125 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0a65da2-1f6c-4d8c-9235-319e35ed53e6"
Dec 08 19:32:46 crc kubenswrapper[5125]: I1208 19:32:46.798491 5125 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0a65da2-1f6c-4d8c-9235-319e35ed53e6"
Dec 08 19:32:46 crc kubenswrapper[5125]: E1208 19:32:46.799191 5125 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:32:46 crc kubenswrapper[5125]: I1208 19:32:46.799863 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:32:46 crc kubenswrapper[5125]: I1208 19:32:46.959088 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"fe4b3161fb0ee3e2ef178a1e221114a8f08155b9507876b89286c3f9057829bc"}
Dec 08 19:32:47 crc kubenswrapper[5125]: I1208 19:32:47.967131 5125 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="994d0c76ce07a0b882ac05b857bc76e4fd28fa9da7a27069c1df77185504bdf6" exitCode=0
Dec 08 19:32:47 crc kubenswrapper[5125]: I1208 19:32:47.967203 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"994d0c76ce07a0b882ac05b857bc76e4fd28fa9da7a27069c1df77185504bdf6"}
Dec 08 19:32:47 crc kubenswrapper[5125]: I1208 19:32:47.967575 5125 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0a65da2-1f6c-4d8c-9235-319e35ed53e6"
Dec 08 19:32:47 crc kubenswrapper[5125]: I1208 19:32:47.967633 5125 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0a65da2-1f6c-4d8c-9235-319e35ed53e6"
Dec 08 19:32:47 crc kubenswrapper[5125]: E1208 19:32:47.968060 5125 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.174:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:32:47 crc kubenswrapper[5125]: I1208 19:32:47.968131 5125 status_manager.go:895] "Failed to get status for pod" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.174:6443: connect: connection refused"
Dec 08 19:32:48 crc kubenswrapper[5125]: I1208 19:32:48.985979 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"4516ef12ca07872dffe7bb1e6c7ffceff6421407ddb7c1704103d464e4dfe27a"}
Dec 08 19:32:48 crc kubenswrapper[5125]: I1208 19:32:48.986336 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"3195db239756d425c5bea94d2f3b57ca70eb78b2d1bb0d460709c26162432a4d"}
Dec 08 19:32:48 crc kubenswrapper[5125]: I1208 19:32:48.986351 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"97b680b26848e947f4cd5b137a943a8727d929bd352c8042f443befbe70a95cf"}
Dec 08 19:32:48 crc kubenswrapper[5125]: I1208 19:32:48.992526 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 19:32:48 crc kubenswrapper[5125]: I1208 19:32:48.992583 5125 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b" exitCode=1
Dec 08 19:32:48 crc kubenswrapper[5125]: I1208 19:32:48.992998 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b"}
Dec 08 19:32:48 crc kubenswrapper[5125]: I1208 19:32:48.993584 5125 scope.go:117] "RemoveContainer" containerID="d1a6ee7cc39cbce21b5d44e71db4af1388154261b0f4e46bf80a1c6aace1d18b"
Dec 08 19:32:49 crc kubenswrapper[5125]: I1208 19:32:49.999772 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 19:32:50 crc kubenswrapper[5125]: I1208 19:32:49.999925 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"b344f715e603cc9c18e212c1105e658d1eb77cad903502d2e4bed07776c34dfb"}
Dec 08 19:32:50 crc kubenswrapper[5125]: I1208 19:32:50.003036 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b6f4f132e3861b977b97fb480c86b31f61de2fe039ec2810fc62e9ddc994fe87"}
Dec 08 19:32:50 crc kubenswrapper[5125]: I1208 19:32:50.003064 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d30b38804a5f986a4ce53870f4cb44ad90cc93d037f362b8d8cdb5a7afb75caf"}
Dec 08 19:32:50 crc kubenswrapper[5125]: I1208 19:32:50.003270 5125 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0a65da2-1f6c-4d8c-9235-319e35ed53e6"
Dec 08 19:32:50 crc kubenswrapper[5125]: I1208 19:32:50.003290 5125 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0a65da2-1f6c-4d8c-9235-319e35ed53e6"
Dec 08 19:32:50 crc kubenswrapper[5125]: I1208 19:32:50.003570 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:32:51 crc kubenswrapper[5125]: I1208 19:32:51.101941 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:32:51 crc kubenswrapper[5125]: I1208 19:32:51.103206 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:32:51 crc kubenswrapper[5125]: I1208 19:32:51.800228 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:32:51 crc kubenswrapper[5125]: I1208 19:32:51.800687 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:32:51 crc kubenswrapper[5125]: I1208 19:32:51.807217 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 19:32:52 crc kubenswrapper[5125]: I1208 19:32:52.163796 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:32:52 crc kubenswrapper[5125]: I1208 19:32:52.170055 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 19:32:53 crc kubenswrapper[5125]: I1208 19:32:53.019932 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness"
status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:32:55 crc kubenswrapper[5125]: I1208 19:32:55.411373 5125 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:55 crc kubenswrapper[5125]: I1208 19:32:55.411428 5125 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:55 crc kubenswrapper[5125]: I1208 19:32:55.553547 5125 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="ea1555b8-ddd4-4f9b-8027-4a6ea287c50b" Dec 08 19:32:56 crc kubenswrapper[5125]: I1208 19:32:56.036472 5125 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0a65da2-1f6c-4d8c-9235-319e35ed53e6" Dec 08 19:32:56 crc kubenswrapper[5125]: I1208 19:32:56.036834 5125 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0a65da2-1f6c-4d8c-9235-319e35ed53e6" Dec 08 19:32:56 crc kubenswrapper[5125]: I1208 19:32:56.039679 5125 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="ea1555b8-ddd4-4f9b-8027-4a6ea287c50b" Dec 08 19:32:56 crc kubenswrapper[5125]: I1208 19:32:56.040629 5125 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://97b680b26848e947f4cd5b137a943a8727d929bd352c8042f443befbe70a95cf" Dec 08 19:32:56 crc kubenswrapper[5125]: I1208 19:32:56.040649 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:32:57 crc kubenswrapper[5125]: I1208 19:32:57.041889 5125 
kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0a65da2-1f6c-4d8c-9235-319e35ed53e6" Dec 08 19:32:57 crc kubenswrapper[5125]: I1208 19:32:57.041933 5125 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f0a65da2-1f6c-4d8c-9235-319e35ed53e6" Dec 08 19:32:57 crc kubenswrapper[5125]: I1208 19:32:57.046000 5125 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="ea1555b8-ddd4-4f9b-8027-4a6ea287c50b" Dec 08 19:33:04 crc kubenswrapper[5125]: I1208 19:33:04.030200 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 19:33:05 crc kubenswrapper[5125]: I1208 19:33:05.514031 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 19:33:05 crc kubenswrapper[5125]: I1208 19:33:05.672238 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 19:33:05 crc kubenswrapper[5125]: I1208 19:33:05.921578 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 19:33:06 crc kubenswrapper[5125]: I1208 19:33:06.128436 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 19:33:06 crc kubenswrapper[5125]: I1208 19:33:06.328951 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 19:33:06 crc kubenswrapper[5125]: I1208 
19:33:06.336731 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 19:33:06 crc kubenswrapper[5125]: I1208 19:33:06.349978 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 19:33:06 crc kubenswrapper[5125]: I1208 19:33:06.363015 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 19:33:06 crc kubenswrapper[5125]: I1208 19:33:06.593279 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 19:33:06 crc kubenswrapper[5125]: I1208 19:33:06.670278 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 19:33:06 crc kubenswrapper[5125]: I1208 19:33:06.774640 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 19:33:06 crc kubenswrapper[5125]: I1208 19:33:06.904976 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:07 crc kubenswrapper[5125]: I1208 19:33:07.215411 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 19:33:07 crc kubenswrapper[5125]: I1208 19:33:07.477201 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 19:33:07 crc kubenswrapper[5125]: I1208 19:33:07.486173 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console\"/\"console-config\"" Dec 08 19:33:08 crc kubenswrapper[5125]: I1208 19:33:08.040784 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 19:33:08 crc kubenswrapper[5125]: I1208 19:33:08.206056 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 19:33:08 crc kubenswrapper[5125]: I1208 19:33:08.358207 5125 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:08 crc kubenswrapper[5125]: I1208 19:33:08.417528 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 19:33:08 crc kubenswrapper[5125]: I1208 19:33:08.530210 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 19:33:08 crc kubenswrapper[5125]: I1208 19:33:08.603498 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 19:33:08 crc kubenswrapper[5125]: I1208 19:33:08.612584 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 19:33:08 crc kubenswrapper[5125]: I1208 19:33:08.820379 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 19:33:08 crc kubenswrapper[5125]: I1208 19:33:08.875550 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:08 crc kubenswrapper[5125]: I1208 19:33:08.942144 5125 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 19:33:08 crc kubenswrapper[5125]: I1208 19:33:08.967775 5125 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.183840 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.240459 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.296926 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.349583 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.413181 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.437327 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.479634 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.488655 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.515326 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.542885 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.608215 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.610645 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.634200 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.651657 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.690201 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.697166 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.790145 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.847020 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 19:33:09 crc kubenswrapper[5125]: I1208 19:33:09.993153 5125 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.037473 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.088713 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.133942 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.172092 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.198205 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.290777 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.301702 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.418281 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.427452 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.452597 5125 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.503359 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.577656 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.634964 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.666750 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.746839 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.778490 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.841255 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.853477 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:10 crc kubenswrapper[5125]: I1208 19:33:10.879473 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 19:33:10 crc kubenswrapper[5125]: 
I1208 19:33:10.933069 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.112289 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.225802 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.274525 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.318339 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.361658 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.483518 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.532974 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.550284 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.579495 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 
19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.648789 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.673013 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.674697 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.717474 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.759127 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.776798 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.801285 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.804516 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.884145 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.938254 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 
19:33:11 crc kubenswrapper[5125]: I1208 19:33:11.964180 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.090995 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.104022 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.185198 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.194843 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.232381 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.241025 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.282034 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.300450 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.370251 5125 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.378242 5125 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.444684 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.470055 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.520403 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.550922 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.574930 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.606268 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.777196 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.819712 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.834325 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 19:33:12 crc 
kubenswrapper[5125]: I1208 19:33:12.872991 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.912262 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 19:33:12 crc kubenswrapper[5125]: I1208 19:33:12.967968 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.061989 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.103073 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.149239 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.171745 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.213029 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.278906 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.296544 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 19:33:13 
crc kubenswrapper[5125]: I1208 19:33:13.318286 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.345518 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.356402 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.384334 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.528990 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.594173 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.635454 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.706570 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.710186 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.827202 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.855749 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.893354 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.927204 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:13 crc kubenswrapper[5125]: I1208 19:33:13.937234 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.097050 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.165412 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.229427 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.306367 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.310024 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.395139 5125 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.447532 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.468032 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.473257 5125 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.485004 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.513031 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.549152 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.635726 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.664194 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.719159 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 19:33:14 
crc kubenswrapper[5125]: I1208 19:33:14.761553 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.917121 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 19:33:14 crc kubenswrapper[5125]: I1208 19:33:14.970955 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.025539 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.042974 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.063184 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.090700 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.148134 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.213430 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.239377 5125 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.246839 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.273729 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.317471 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.375196 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.393358 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.395518 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.446047 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.454694 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.455142 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 
19:33:15.500406 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.505662 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.507016 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.509349 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.594121 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.641171 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.658914 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.671709 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.743462 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.775944 5125 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.866452 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 19:33:15 crc kubenswrapper[5125]: I1208 19:33:15.995495 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.029381 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.177025 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.296551 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.346933 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.372349 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.390818 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.398799 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" 
Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.445138 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.521583 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.530964 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.558447 5125 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.562848 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.562913 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.568560 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.581744 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.581726506 podStartE2EDuration="21.581726506s" podCreationTimestamp="2025-12-08 19:32:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:33:16.580906614 +0000 UTC m=+253.351396898" watchObservedRunningTime="2025-12-08 19:33:16.581726506 +0000 UTC m=+253.352216790" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.619797 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.727823 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.787017 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.852895 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.912875 5125 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.955965 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 19:33:16 crc kubenswrapper[5125]: I1208 19:33:16.959474 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.113184 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.121493 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.187704 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.213456 5125 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.216151 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.229936 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.247276 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.320993 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.343399 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.548410 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.596400 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.638280 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.676441 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.677540 5125 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.757161 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.866142 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.897572 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.903270 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.930288 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.935823 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.957709 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.959499 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5125]: I1208 19:33:17.986980 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:17 crc kubenswrapper[5125]: 
I1208 19:33:17.992961 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.117056 5125 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.117556 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834" gracePeriod=5 Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.241682 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.286602 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.366146 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.444906 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.460021 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.462591 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5125]: 
I1208 19:33:18.481105 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.543333 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.561501 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.573685 5125 ???:1] "http: TLS handshake error from 192.168.126.11:34110: no serving certificate available for the kubelet" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.593697 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.624156 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.667929 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.734458 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 19:33:18 crc kubenswrapper[5125]: I1208 19:33:18.916242 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 19:33:19 crc kubenswrapper[5125]: I1208 19:33:19.055998 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 
19:33:19 crc kubenswrapper[5125]: I1208 19:33:19.131584 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:19 crc kubenswrapper[5125]: I1208 19:33:19.146863 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 19:33:19 crc kubenswrapper[5125]: I1208 19:33:19.170630 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 19:33:19 crc kubenswrapper[5125]: I1208 19:33:19.186163 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 19:33:19 crc kubenswrapper[5125]: I1208 19:33:19.525864 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 19:33:19 crc kubenswrapper[5125]: I1208 19:33:19.632981 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 19:33:19 crc kubenswrapper[5125]: I1208 19:33:19.679047 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:33:19 crc kubenswrapper[5125]: I1208 19:33:19.703772 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 19:33:19 crc kubenswrapper[5125]: I1208 19:33:19.750204 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 19:33:19 crc kubenswrapper[5125]: I1208 19:33:19.816798 5125 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 19:33:20 crc kubenswrapper[5125]: I1208 19:33:20.026030 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 19:33:20 crc kubenswrapper[5125]: I1208 19:33:20.089712 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 19:33:20 crc kubenswrapper[5125]: I1208 19:33:20.326097 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 19:33:20 crc kubenswrapper[5125]: I1208 19:33:20.619858 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 19:33:20 crc kubenswrapper[5125]: I1208 19:33:20.807088 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 19:33:20 crc kubenswrapper[5125]: I1208 19:33:20.810526 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 19:33:20 crc kubenswrapper[5125]: I1208 19:33:20.811510 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 19:33:20 crc kubenswrapper[5125]: I1208 19:33:20.898991 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 19:33:21 crc kubenswrapper[5125]: I1208 19:33:21.101456 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Dec 08 19:33:21 crc kubenswrapper[5125]: I1208 19:33:21.101563 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:33:21 crc kubenswrapper[5125]: I1208 19:33:21.270233 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 19:33:21 crc kubenswrapper[5125]: I1208 19:33:21.532413 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 19:33:21 crc kubenswrapper[5125]: I1208 19:33:21.594919 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 19:33:22 crc kubenswrapper[5125]: I1208 19:33:22.355242 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.624278 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.684497 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.684567 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.686146 5125 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.771970 5125 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.841818 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.841904 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.841931 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.842000 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.842029 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.842044 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.842076 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.842090 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.842143 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.842497 5125 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.842515 5125 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.842524 5125 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.842533 5125 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.849756 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:33:23 crc kubenswrapper[5125]: I1208 19:33:23.944218 5125 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:24 crc kubenswrapper[5125]: I1208 19:33:24.204166 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Dec 08 19:33:24 crc kubenswrapper[5125]: I1208 19:33:24.204223 5125 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834" exitCode=137
Dec 08 19:33:24 crc kubenswrapper[5125]: I1208 19:33:24.204329 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 19:33:24 crc kubenswrapper[5125]: I1208 19:33:24.204344 5125 scope.go:117] "RemoveContainer" containerID="f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834"
Dec 08 19:33:24 crc kubenswrapper[5125]: I1208 19:33:24.205675 5125 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 08 19:33:24 crc kubenswrapper[5125]: I1208 19:33:24.220513 5125 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Dec 08 19:33:24 crc kubenswrapper[5125]: I1208 19:33:24.221802 5125 scope.go:117] "RemoveContainer" containerID="f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834"
Dec 08 19:33:24 crc kubenswrapper[5125]: E1208 19:33:24.222167 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834\": container with ID starting with f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834 not found: ID does not exist" containerID="f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834"
Dec 08 19:33:24 crc kubenswrapper[5125]: I1208 19:33:24.222201 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834"} err="failed to get container status \"f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834\": rpc error: code = NotFound desc = could not find container \"f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834\": container with ID starting with f38e8772336fb936b11ac92000c9d5e8a3bba4479c7d63f39833e2c4b5cee834 not found: ID does not exist"
Dec 08 19:33:25 crc kubenswrapper[5125]: I1208 19:33:25.776844 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.185478 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gs6mc"]
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.186411 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gs6mc" podUID="9e9aba28-961e-4643-92d8-d718748862c6" containerName="registry-server" containerID="cri-o://39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26" gracePeriod=30
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.194860 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5dng"]
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.195167 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c5dng" podUID="edf1ad5e-15fa-4885-be31-4124514570a1" containerName="registry-server" containerID="cri-o://8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49" gracePeriod=30
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.202741 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-75h8s"]
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.204422 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" podUID="77083b49-6a76-42e1-9f35-4b34306c23d3" containerName="marketplace-operator" containerID="cri-o://888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd" gracePeriod=30
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.221058 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnqn9"]
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.221423 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cnqn9" podUID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" containerName="registry-server" containerID="cri-o://edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3" gracePeriod=30
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.230854 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fgxfn"]
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.231251 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fgxfn" podUID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" containerName="registry-server" containerID="cri-o://ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98" gracePeriod=30
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.241357 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"]
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.242153 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.242186 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.242205 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" containerName="installer"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.242214 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" containerName="installer"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.242349 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.242361 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="d84464a9-ebd2-4e20-8196-6d468034e0cc" containerName="installer"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.246908 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.259526 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"]
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.351787 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f860e3e-558a-46f2-91eb-ad626e827732-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.352225 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f860e3e-558a-46f2-91eb-ad626e827732-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.352287 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klz5n\" (UniqueName: \"kubernetes.io/projected/1f860e3e-558a-46f2-91eb-ad626e827732-kube-api-access-klz5n\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.352323 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f860e3e-558a-46f2-91eb-ad626e827732-tmp\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.453411 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-klz5n\" (UniqueName: \"kubernetes.io/projected/1f860e3e-558a-46f2-91eb-ad626e827732-kube-api-access-klz5n\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.453455 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f860e3e-558a-46f2-91eb-ad626e827732-tmp\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.453519 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f860e3e-558a-46f2-91eb-ad626e827732-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.453569 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f860e3e-558a-46f2-91eb-ad626e827732-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.454392 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1f860e3e-558a-46f2-91eb-ad626e827732-tmp\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.454872 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1f860e3e-558a-46f2-91eb-ad626e827732-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.461762 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1f860e3e-558a-46f2-91eb-ad626e827732-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.469738 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-klz5n\" (UniqueName: \"kubernetes.io/projected/1f860e3e-558a-46f2-91eb-ad626e827732-kube-api-access-klz5n\") pod \"marketplace-operator-547dbd544d-9vtxw\" (UID: \"1f860e3e-558a-46f2-91eb-ad626e827732\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.592263 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.596173 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c5dng"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.604468 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gs6mc"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.611014 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.636737 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fgxfn"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.648188 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnqn9"
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.656364 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77083b49-6a76-42e1-9f35-4b34306c23d3-tmp" (OuterVolumeSpecName: "tmp") pod "77083b49-6a76-42e1-9f35-4b34306c23d3" (UID: "77083b49-6a76-42e1-9f35-4b34306c23d3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.655774 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77083b49-6a76-42e1-9f35-4b34306c23d3-tmp\") pod \"77083b49-6a76-42e1-9f35-4b34306c23d3\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.656466 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-utilities\") pod \"9e9aba28-961e-4643-92d8-d718748862c6\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.656492 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rflnr\" (UniqueName: \"kubernetes.io/projected/edf1ad5e-15fa-4885-be31-4124514570a1-kube-api-access-rflnr\") pod \"edf1ad5e-15fa-4885-be31-4124514570a1\" (UID: \"edf1ad5e-15fa-4885-be31-4124514570a1\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.663885 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-catalog-content\") pod \"edf1ad5e-15fa-4885-be31-4124514570a1\" (UID: \"edf1ad5e-15fa-4885-be31-4124514570a1\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.663954 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-catalog-content\") pod \"9e9aba28-961e-4643-92d8-d718748862c6\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.664133 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-operator-metrics\") pod \"77083b49-6a76-42e1-9f35-4b34306c23d3\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.664161 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-catalog-content\") pod \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.664186 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bw68\" (UniqueName: \"kubernetes.io/projected/77083b49-6a76-42e1-9f35-4b34306c23d3-kube-api-access-6bw68\") pod \"77083b49-6a76-42e1-9f35-4b34306c23d3\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.664233 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjd9b\" (UniqueName: \"kubernetes.io/projected/84e9ab89-5847-44a9-b4d5-11fd35eea65f-kube-api-access-pjd9b\") pod \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.664288 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-trusted-ca\") pod \"77083b49-6a76-42e1-9f35-4b34306c23d3\" (UID: \"77083b49-6a76-42e1-9f35-4b34306c23d3\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.664316 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8vrv\" (UniqueName: \"kubernetes.io/projected/9e9aba28-961e-4643-92d8-d718748862c6-kube-api-access-q8vrv\") pod \"9e9aba28-961e-4643-92d8-d718748862c6\" (UID: \"9e9aba28-961e-4643-92d8-d718748862c6\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.664349 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-utilities\") pod \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\" (UID: \"84e9ab89-5847-44a9-b4d5-11fd35eea65f\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.664384 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-utilities\") pod \"edf1ad5e-15fa-4885-be31-4124514570a1\" (UID: \"edf1ad5e-15fa-4885-be31-4124514570a1\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.665264 5125 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/77083b49-6a76-42e1-9f35-4b34306c23d3-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.663084 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf1ad5e-15fa-4885-be31-4124514570a1-kube-api-access-rflnr" (OuterVolumeSpecName: "kube-api-access-rflnr") pod "edf1ad5e-15fa-4885-be31-4124514570a1" (UID: "edf1ad5e-15fa-4885-be31-4124514570a1"). InnerVolumeSpecName "kube-api-access-rflnr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.663764 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-utilities" (OuterVolumeSpecName: "utilities") pod "9e9aba28-961e-4643-92d8-d718748862c6" (UID: "9e9aba28-961e-4643-92d8-d718748862c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.667632 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "77083b49-6a76-42e1-9f35-4b34306c23d3" (UID: "77083b49-6a76-42e1-9f35-4b34306c23d3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.668400 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-utilities" (OuterVolumeSpecName: "utilities") pod "84e9ab89-5847-44a9-b4d5-11fd35eea65f" (UID: "84e9ab89-5847-44a9-b4d5-11fd35eea65f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.668769 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-utilities" (OuterVolumeSpecName: "utilities") pod "edf1ad5e-15fa-4885-be31-4124514570a1" (UID: "edf1ad5e-15fa-4885-be31-4124514570a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.671709 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e9ab89-5847-44a9-b4d5-11fd35eea65f-kube-api-access-pjd9b" (OuterVolumeSpecName: "kube-api-access-pjd9b") pod "84e9ab89-5847-44a9-b4d5-11fd35eea65f" (UID: "84e9ab89-5847-44a9-b4d5-11fd35eea65f"). InnerVolumeSpecName "kube-api-access-pjd9b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.674586 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "77083b49-6a76-42e1-9f35-4b34306c23d3" (UID: "77083b49-6a76-42e1-9f35-4b34306c23d3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.676524 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77083b49-6a76-42e1-9f35-4b34306c23d3-kube-api-access-6bw68" (OuterVolumeSpecName: "kube-api-access-6bw68") pod "77083b49-6a76-42e1-9f35-4b34306c23d3" (UID: "77083b49-6a76-42e1-9f35-4b34306c23d3"). InnerVolumeSpecName "kube-api-access-6bw68". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.682781 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9aba28-961e-4643-92d8-d718748862c6-kube-api-access-q8vrv" (OuterVolumeSpecName: "kube-api-access-q8vrv") pod "9e9aba28-961e-4643-92d8-d718748862c6" (UID: "9e9aba28-961e-4643-92d8-d718748862c6"). InnerVolumeSpecName "kube-api-access-q8vrv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.742027 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edf1ad5e-15fa-4885-be31-4124514570a1" (UID: "edf1ad5e-15fa-4885-be31-4124514570a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.750915 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e9aba28-961e-4643-92d8-d718748862c6" (UID: "9e9aba28-961e-4643-92d8-d718748862c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766452 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9mss\" (UniqueName: \"kubernetes.io/projected/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-kube-api-access-b9mss\") pod \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766574 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-catalog-content\") pod \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766684 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-utilities\") pod \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\" (UID: \"250d3433-c9c9-4cc2-b0ff-fae4f22615b3\") "
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766892 5125 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766906 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6bw68\" (UniqueName: \"kubernetes.io/projected/77083b49-6a76-42e1-9f35-4b34306c23d3-kube-api-access-6bw68\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766919 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pjd9b\" (UniqueName: \"kubernetes.io/projected/84e9ab89-5847-44a9-b4d5-11fd35eea65f-kube-api-access-pjd9b\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766932 5125 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/77083b49-6a76-42e1-9f35-4b34306c23d3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766943 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q8vrv\" (UniqueName: \"kubernetes.io/projected/9e9aba28-961e-4643-92d8-d718748862c6-kube-api-access-q8vrv\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766954 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766965 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766976 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766986 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rflnr\" (UniqueName: \"kubernetes.io/projected/edf1ad5e-15fa-4885-be31-4124514570a1-kube-api-access-rflnr\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.766998 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edf1ad5e-15fa-4885-be31-4124514570a1-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.767009 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e9aba28-961e-4643-92d8-d718748862c6-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.768047 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-utilities" (OuterVolumeSpecName: "utilities") pod "250d3433-c9c9-4cc2-b0ff-fae4f22615b3" (UID: "250d3433-c9c9-4cc2-b0ff-fae4f22615b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.769748 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-kube-api-access-b9mss" (OuterVolumeSpecName: "kube-api-access-b9mss") pod "250d3433-c9c9-4cc2-b0ff-fae4f22615b3" (UID: "250d3433-c9c9-4cc2-b0ff-fae4f22615b3"). InnerVolumeSpecName "kube-api-access-b9mss". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.778070 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "250d3433-c9c9-4cc2-b0ff-fae4f22615b3" (UID: "250d3433-c9c9-4cc2-b0ff-fae4f22615b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.808821 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-9vtxw"]
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.816881 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84e9ab89-5847-44a9-b4d5-11fd35eea65f" (UID: "84e9ab89-5847-44a9-b4d5-11fd35eea65f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.867953 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84e9ab89-5847-44a9-b4d5-11fd35eea65f-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.867988 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b9mss\" (UniqueName: \"kubernetes.io/projected/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-kube-api-access-b9mss\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.868004 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:31 crc kubenswrapper[5125]: I1208 19:33:31.868015 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/250d3433-c9c9-4cc2-b0ff-fae4f22615b3-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.250457 5125 generic.go:358] "Generic (PLEG): container finished" podID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" containerID="edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3" exitCode=0
Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.250787 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnqn9"
Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.250666 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnqn9" event={"ID":"250d3433-c9c9-4cc2-b0ff-fae4f22615b3","Type":"ContainerDied","Data":"edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3"}
Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.250857 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnqn9" event={"ID":"250d3433-c9c9-4cc2-b0ff-fae4f22615b3","Type":"ContainerDied","Data":"92510f5548b2ab221a44f7b9e35d68d49b55b0c07e3f0b66cd53f16972b28bc3"}
Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.250888 5125 scope.go:117] "RemoveContainer" containerID="edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3"
Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.255067 5125 generic.go:358] "Generic (PLEG): container finished" podID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" containerID="ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98" exitCode=0
Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.255145 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgxfn" event={"ID":"84e9ab89-5847-44a9-b4d5-11fd35eea65f","Type":"ContainerDied","Data":"ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98"}
Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.255166 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fgxfn" event={"ID":"84e9ab89-5847-44a9-b4d5-11fd35eea65f","Type":"ContainerDied","Data":"b1656d9dc3295ca8035fe6afec300e02e8f23b638bf7eacf6c8fde8fb97f78b3"}
Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.255292 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fgxfn" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.269067 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw" event={"ID":"1f860e3e-558a-46f2-91eb-ad626e827732","Type":"ContainerStarted","Data":"dffc66a9c282f51654bcbe4c7f5130f070a47411f814abc3fd773201034bc675"} Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.269123 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw" event={"ID":"1f860e3e-558a-46f2-91eb-ad626e827732","Type":"ContainerStarted","Data":"cde1d7ffe133415860367d92a6725a8e6aa9cd12aa4b6fd161b0de1638924fa2"} Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.269748 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.275096 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.276242 5125 generic.go:358] "Generic (PLEG): container finished" podID="9e9aba28-961e-4643-92d8-d718748862c6" containerID="39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26" exitCode=0 Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.276435 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gs6mc" event={"ID":"9e9aba28-961e-4643-92d8-d718748862c6","Type":"ContainerDied","Data":"39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26"} Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.276475 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-gs6mc" event={"ID":"9e9aba28-961e-4643-92d8-d718748862c6","Type":"ContainerDied","Data":"43c498fba30fa51637dd7805a839eac4cad54e53e1b9bf6142bf6496e135824b"} Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.276475 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gs6mc" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.280801 5125 generic.go:358] "Generic (PLEG): container finished" podID="77083b49-6a76-42e1-9f35-4b34306c23d3" containerID="888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd" exitCode=0 Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.280967 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" event={"ID":"77083b49-6a76-42e1-9f35-4b34306c23d3","Type":"ContainerDied","Data":"888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd"} Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.281017 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" event={"ID":"77083b49-6a76-42e1-9f35-4b34306c23d3","Type":"ContainerDied","Data":"0745172ec75d8a72ba45ea814ca563051989e1f665af3892649a2166bca58b37"} Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.281174 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-75h8s" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.284918 5125 generic.go:358] "Generic (PLEG): container finished" podID="edf1ad5e-15fa-4885-be31-4124514570a1" containerID="8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49" exitCode=0 Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.285100 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5dng" event={"ID":"edf1ad5e-15fa-4885-be31-4124514570a1","Type":"ContainerDied","Data":"8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49"} Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.285205 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c5dng" event={"ID":"edf1ad5e-15fa-4885-be31-4124514570a1","Type":"ContainerDied","Data":"95c0843581cabb481674723fc11704b09c4375695b040b97e6fd01d6c109619a"} Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.285384 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c5dng" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.288872 5125 scope.go:117] "RemoveContainer" containerID="440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.300441 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnqn9"] Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.304454 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnqn9"] Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.312439 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-9vtxw" podStartSLOduration=1.312391992 podStartE2EDuration="1.312391992s" podCreationTimestamp="2025-12-08 19:33:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:33:32.294901845 +0000 UTC m=+269.065392129" watchObservedRunningTime="2025-12-08 19:33:32.312391992 +0000 UTC m=+269.082882266" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.320644 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c5dng"] Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.321142 5125 scope.go:117] "RemoveContainer" containerID="a038b9e03bb343183e4fd6c6b8e031a8733e2f8501df0156b42cc0f9857009f0" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.326795 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c5dng"] Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.330731 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gs6mc"] Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.339226 5125 kubelet.go:2547] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/certified-operators-gs6mc"] Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.340939 5125 scope.go:117] "RemoveContainer" containerID="edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.341447 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3\": container with ID starting with edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3 not found: ID does not exist" containerID="edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.341478 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3"} err="failed to get container status \"edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3\": rpc error: code = NotFound desc = could not find container \"edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3\": container with ID starting with edec98544f516fe01b992547c72e105c98d98ee479f25f01acd725ce56e6f9c3 not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.341501 5125 scope.go:117] "RemoveContainer" containerID="440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.344681 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926\": container with ID starting with 440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926 not found: ID does not exist" containerID="440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 
19:33:32.344710 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926"} err="failed to get container status \"440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926\": rpc error: code = NotFound desc = could not find container \"440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926\": container with ID starting with 440f23ce25b2e51813b469e6fe7252478cb27152fa938eee16281b7c3cd64926 not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.344731 5125 scope.go:117] "RemoveContainer" containerID="a038b9e03bb343183e4fd6c6b8e031a8733e2f8501df0156b42cc0f9857009f0" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.344981 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a038b9e03bb343183e4fd6c6b8e031a8733e2f8501df0156b42cc0f9857009f0\": container with ID starting with a038b9e03bb343183e4fd6c6b8e031a8733e2f8501df0156b42cc0f9857009f0 not found: ID does not exist" containerID="a038b9e03bb343183e4fd6c6b8e031a8733e2f8501df0156b42cc0f9857009f0" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.345004 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a038b9e03bb343183e4fd6c6b8e031a8733e2f8501df0156b42cc0f9857009f0"} err="failed to get container status \"a038b9e03bb343183e4fd6c6b8e031a8733e2f8501df0156b42cc0f9857009f0\": rpc error: code = NotFound desc = could not find container \"a038b9e03bb343183e4fd6c6b8e031a8733e2f8501df0156b42cc0f9857009f0\": container with ID starting with a038b9e03bb343183e4fd6c6b8e031a8733e2f8501df0156b42cc0f9857009f0 not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.345020 5125 scope.go:117] "RemoveContainer" containerID="ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98" Dec 08 19:33:32 crc 
kubenswrapper[5125]: I1208 19:33:32.346147 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-75h8s"] Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.350348 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-75h8s"] Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.353620 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fgxfn"] Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.356922 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fgxfn"] Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.381547 5125 scope.go:117] "RemoveContainer" containerID="b820d5bf6cad23411b3f27e588ffd5e06f01d2058473ecbd5324bf8c6447f307" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.397742 5125 scope.go:117] "RemoveContainer" containerID="d755ce65a9a4a84839788ccc020b80d1a2cb94429fe1e45b3f1891c5730c0cb4" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.423866 5125 scope.go:117] "RemoveContainer" containerID="ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.424317 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98\": container with ID starting with ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98 not found: ID does not exist" containerID="ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.424368 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98"} err="failed to get container status 
\"ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98\": rpc error: code = NotFound desc = could not find container \"ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98\": container with ID starting with ff79163aee5978f0e25125c23263668107ae17d488fe6d6099be451c47d26c98 not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.424399 5125 scope.go:117] "RemoveContainer" containerID="b820d5bf6cad23411b3f27e588ffd5e06f01d2058473ecbd5324bf8c6447f307" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.424965 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b820d5bf6cad23411b3f27e588ffd5e06f01d2058473ecbd5324bf8c6447f307\": container with ID starting with b820d5bf6cad23411b3f27e588ffd5e06f01d2058473ecbd5324bf8c6447f307 not found: ID does not exist" containerID="b820d5bf6cad23411b3f27e588ffd5e06f01d2058473ecbd5324bf8c6447f307" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.425012 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b820d5bf6cad23411b3f27e588ffd5e06f01d2058473ecbd5324bf8c6447f307"} err="failed to get container status \"b820d5bf6cad23411b3f27e588ffd5e06f01d2058473ecbd5324bf8c6447f307\": rpc error: code = NotFound desc = could not find container \"b820d5bf6cad23411b3f27e588ffd5e06f01d2058473ecbd5324bf8c6447f307\": container with ID starting with b820d5bf6cad23411b3f27e588ffd5e06f01d2058473ecbd5324bf8c6447f307 not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.425039 5125 scope.go:117] "RemoveContainer" containerID="d755ce65a9a4a84839788ccc020b80d1a2cb94429fe1e45b3f1891c5730c0cb4" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.425337 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d755ce65a9a4a84839788ccc020b80d1a2cb94429fe1e45b3f1891c5730c0cb4\": container with ID starting with d755ce65a9a4a84839788ccc020b80d1a2cb94429fe1e45b3f1891c5730c0cb4 not found: ID does not exist" containerID="d755ce65a9a4a84839788ccc020b80d1a2cb94429fe1e45b3f1891c5730c0cb4" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.425364 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d755ce65a9a4a84839788ccc020b80d1a2cb94429fe1e45b3f1891c5730c0cb4"} err="failed to get container status \"d755ce65a9a4a84839788ccc020b80d1a2cb94429fe1e45b3f1891c5730c0cb4\": rpc error: code = NotFound desc = could not find container \"d755ce65a9a4a84839788ccc020b80d1a2cb94429fe1e45b3f1891c5730c0cb4\": container with ID starting with d755ce65a9a4a84839788ccc020b80d1a2cb94429fe1e45b3f1891c5730c0cb4 not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.425379 5125 scope.go:117] "RemoveContainer" containerID="39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.451022 5125 scope.go:117] "RemoveContainer" containerID="924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.467551 5125 scope.go:117] "RemoveContainer" containerID="6a2d566268cd4f2fc1723697db7cf9bbca185afd1cfc85428bcdc1ac2768e1e6" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.482798 5125 scope.go:117] "RemoveContainer" containerID="39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.483214 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26\": container with ID starting with 39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26 not found: ID does not exist" 
containerID="39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.483261 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26"} err="failed to get container status \"39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26\": rpc error: code = NotFound desc = could not find container \"39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26\": container with ID starting with 39057419e2efc66299ed5b859d40e1267fa834f80e7259b9e3c0df86a7c20f26 not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.483288 5125 scope.go:117] "RemoveContainer" containerID="924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.483934 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf\": container with ID starting with 924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf not found: ID does not exist" containerID="924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.483967 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf"} err="failed to get container status \"924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf\": rpc error: code = NotFound desc = could not find container \"924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf\": container with ID starting with 924e10d5e2e5ef1151ff56234a047c917d09c730c668dacaef22c2a8cc93dfcf not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.483985 5125 scope.go:117] 
"RemoveContainer" containerID="6a2d566268cd4f2fc1723697db7cf9bbca185afd1cfc85428bcdc1ac2768e1e6" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.484184 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a2d566268cd4f2fc1723697db7cf9bbca185afd1cfc85428bcdc1ac2768e1e6\": container with ID starting with 6a2d566268cd4f2fc1723697db7cf9bbca185afd1cfc85428bcdc1ac2768e1e6 not found: ID does not exist" containerID="6a2d566268cd4f2fc1723697db7cf9bbca185afd1cfc85428bcdc1ac2768e1e6" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.484211 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a2d566268cd4f2fc1723697db7cf9bbca185afd1cfc85428bcdc1ac2768e1e6"} err="failed to get container status \"6a2d566268cd4f2fc1723697db7cf9bbca185afd1cfc85428bcdc1ac2768e1e6\": rpc error: code = NotFound desc = could not find container \"6a2d566268cd4f2fc1723697db7cf9bbca185afd1cfc85428bcdc1ac2768e1e6\": container with ID starting with 6a2d566268cd4f2fc1723697db7cf9bbca185afd1cfc85428bcdc1ac2768e1e6 not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.484225 5125 scope.go:117] "RemoveContainer" containerID="888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.497306 5125 scope.go:117] "RemoveContainer" containerID="888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.497842 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd\": container with ID starting with 888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd not found: ID does not exist" containerID="888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd" Dec 08 19:33:32 crc 
kubenswrapper[5125]: I1208 19:33:32.497874 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd"} err="failed to get container status \"888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd\": rpc error: code = NotFound desc = could not find container \"888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd\": container with ID starting with 888b9328d7f7f9d29a4a3c3048a8ca56fc7b82b46ffa17f29d175421683f52dd not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.497893 5125 scope.go:117] "RemoveContainer" containerID="8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.510785 5125 scope.go:117] "RemoveContainer" containerID="4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.523514 5125 scope.go:117] "RemoveContainer" containerID="e11b6a4653389b10833c7862eee1a4e830b4e2b2b8da840604b4bb87d2963c37" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.536348 5125 scope.go:117] "RemoveContainer" containerID="8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.536897 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49\": container with ID starting with 8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49 not found: ID does not exist" containerID="8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.536936 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49"} err="failed to 
get container status \"8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49\": rpc error: code = NotFound desc = could not find container \"8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49\": container with ID starting with 8c7f66b7389391cb20133fc19153f3525e40584c9118f824faff3c7626c47e49 not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.536962 5125 scope.go:117] "RemoveContainer" containerID="4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.537294 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0\": container with ID starting with 4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0 not found: ID does not exist" containerID="4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.537333 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0"} err="failed to get container status \"4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0\": rpc error: code = NotFound desc = could not find container \"4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0\": container with ID starting with 4df8ea5564a87cecfac6bc008c168495ad1f9229bfa17e8970a352d081700dc0 not found: ID does not exist" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.537360 5125 scope.go:117] "RemoveContainer" containerID="e11b6a4653389b10833c7862eee1a4e830b4e2b2b8da840604b4bb87d2963c37" Dec 08 19:33:32 crc kubenswrapper[5125]: E1208 19:33:32.537695 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e11b6a4653389b10833c7862eee1a4e830b4e2b2b8da840604b4bb87d2963c37\": container with ID starting with e11b6a4653389b10833c7862eee1a4e830b4e2b2b8da840604b4bb87d2963c37 not found: ID does not exist" containerID="e11b6a4653389b10833c7862eee1a4e830b4e2b2b8da840604b4bb87d2963c37" Dec 08 19:33:32 crc kubenswrapper[5125]: I1208 19:33:32.537733 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e11b6a4653389b10833c7862eee1a4e830b4e2b2b8da840604b4bb87d2963c37"} err="failed to get container status \"e11b6a4653389b10833c7862eee1a4e830b4e2b2b8da840604b4bb87d2963c37\": rpc error: code = NotFound desc = could not find container \"e11b6a4653389b10833c7862eee1a4e830b4e2b2b8da840604b4bb87d2963c37\": container with ID starting with e11b6a4653389b10833c7862eee1a4e830b4e2b2b8da840604b4bb87d2963c37 not found: ID does not exist" Dec 08 19:33:33 crc kubenswrapper[5125]: I1208 19:33:33.774853 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" path="/var/lib/kubelet/pods/250d3433-c9c9-4cc2-b0ff-fae4f22615b3/volumes" Dec 08 19:33:33 crc kubenswrapper[5125]: I1208 19:33:33.775776 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77083b49-6a76-42e1-9f35-4b34306c23d3" path="/var/lib/kubelet/pods/77083b49-6a76-42e1-9f35-4b34306c23d3/volumes" Dec 08 19:33:33 crc kubenswrapper[5125]: I1208 19:33:33.776326 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" path="/var/lib/kubelet/pods/84e9ab89-5847-44a9-b4d5-11fd35eea65f/volumes" Dec 08 19:33:33 crc kubenswrapper[5125]: I1208 19:33:33.777798 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9aba28-961e-4643-92d8-d718748862c6" path="/var/lib/kubelet/pods/9e9aba28-961e-4643-92d8-d718748862c6/volumes" Dec 08 19:33:33 crc kubenswrapper[5125]: I1208 19:33:33.778480 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="edf1ad5e-15fa-4885-be31-4124514570a1" path="/var/lib/kubelet/pods/edf1ad5e-15fa-4885-be31-4124514570a1/volumes" Dec 08 19:33:45 crc kubenswrapper[5125]: I1208 19:33:45.207426 5125 ???:1] "http: TLS handshake error from 192.168.126.11:53744: no serving certificate available for the kubelet" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.101732 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.102320 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.102368 5125 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.103003 5125 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a86a0816bac7ca3fa402c6544237e9e92be21df715faf34c0d65ab20b3280854"} pod="openshift-machine-config-operator/machine-config-daemon-slhjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.103058 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" 
containerID="cri-o://a86a0816bac7ca3fa402c6544237e9e92be21df715faf34c0d65ab20b3280854" gracePeriod=600 Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.199852 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"] Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.200475 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" podUID="840b1c0b-8303-40bb-a881-8a974ea23710" containerName="controller-manager" containerID="cri-o://a91afdad36df325d6f4d1fd5450965f5cc07adf21d37118c50ac52b0143bd097" gracePeriod=30 Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.242067 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"] Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.242322 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" podUID="1872a46a-0e1f-469d-b403-8a1e0805d291" containerName="route-controller-manager" containerID="cri-o://3f55efd52ee79979c5783b52c59de168693467ffeb12975c2ed4136ae6015879" gracePeriod=30 Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.401272 5125 generic.go:358] "Generic (PLEG): container finished" podID="840b1c0b-8303-40bb-a881-8a974ea23710" containerID="a91afdad36df325d6f4d1fd5450965f5cc07adf21d37118c50ac52b0143bd097" exitCode=0 Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.401678 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" event={"ID":"840b1c0b-8303-40bb-a881-8a974ea23710","Type":"ContainerDied","Data":"a91afdad36df325d6f4d1fd5450965f5cc07adf21d37118c50ac52b0143bd097"} Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.403442 5125 generic.go:358] "Generic (PLEG): 
container finished" podID="1872a46a-0e1f-469d-b403-8a1e0805d291" containerID="3f55efd52ee79979c5783b52c59de168693467ffeb12975c2ed4136ae6015879" exitCode=0 Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.403576 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" event={"ID":"1872a46a-0e1f-469d-b403-8a1e0805d291","Type":"ContainerDied","Data":"3f55efd52ee79979c5783b52c59de168693467ffeb12975c2ed4136ae6015879"} Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.405910 5125 generic.go:358] "Generic (PLEG): container finished" podID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerID="a86a0816bac7ca3fa402c6544237e9e92be21df715faf34c0d65ab20b3280854" exitCode=0 Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.406002 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerDied","Data":"a86a0816bac7ca3fa402c6544237e9e92be21df715faf34c0d65ab20b3280854"} Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.610066 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.616280 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.641300 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"] Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.641991 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" containerName="registry-server" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642016 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" containerName="registry-server" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642030 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="edf1ad5e-15fa-4885-be31-4124514570a1" containerName="extract-utilities" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642037 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf1ad5e-15fa-4885-be31-4124514570a1" containerName="extract-utilities" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642048 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" containerName="extract-content" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642057 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" containerName="extract-content" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642067 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" containerName="registry-server" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642075 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" containerName="registry-server" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 
19:33:51.642086 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="edf1ad5e-15fa-4885-be31-4124514570a1" containerName="registry-server" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642094 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf1ad5e-15fa-4885-be31-4124514570a1" containerName="registry-server" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642105 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e9aba28-961e-4643-92d8-d718748862c6" containerName="extract-content" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642113 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9aba28-961e-4643-92d8-d718748862c6" containerName="extract-content" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642128 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" containerName="extract-utilities" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642134 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" containerName="extract-utilities" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642146 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" containerName="extract-utilities" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642153 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" containerName="extract-utilities" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642170 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1872a46a-0e1f-469d-b403-8a1e0805d291" containerName="route-controller-manager" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642177 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="1872a46a-0e1f-469d-b403-8a1e0805d291" 
containerName="route-controller-manager" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642187 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="840b1c0b-8303-40bb-a881-8a974ea23710" containerName="controller-manager" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642193 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="840b1c0b-8303-40bb-a881-8a974ea23710" containerName="controller-manager" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642202 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="edf1ad5e-15fa-4885-be31-4124514570a1" containerName="extract-content" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642210 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf1ad5e-15fa-4885-be31-4124514570a1" containerName="extract-content" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642220 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e9aba28-961e-4643-92d8-d718748862c6" containerName="registry-server" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642227 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9aba28-961e-4643-92d8-d718748862c6" containerName="registry-server" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642240 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9e9aba28-961e-4643-92d8-d718748862c6" containerName="extract-utilities" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642247 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e9aba28-961e-4643-92d8-d718748862c6" containerName="extract-utilities" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642258 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="77083b49-6a76-42e1-9f35-4b34306c23d3" containerName="marketplace-operator" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642266 5125 
state_mem.go:107] "Deleted CPUSet assignment" podUID="77083b49-6a76-42e1-9f35-4b34306c23d3" containerName="marketplace-operator" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642275 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" containerName="extract-content" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642282 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" containerName="extract-content" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642399 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="1872a46a-0e1f-469d-b403-8a1e0805d291" containerName="route-controller-manager" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642412 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="edf1ad5e-15fa-4885-be31-4124514570a1" containerName="registry-server" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642423 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="250d3433-c9c9-4cc2-b0ff-fae4f22615b3" containerName="registry-server" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642433 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="77083b49-6a76-42e1-9f35-4b34306c23d3" containerName="marketplace-operator" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642446 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="840b1c0b-8303-40bb-a881-8a974ea23710" containerName="controller-manager" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642459 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="84e9ab89-5847-44a9-b4d5-11fd35eea65f" containerName="registry-server" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.642468 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="9e9aba28-961e-4643-92d8-d718748862c6" containerName="registry-server" Dec 08 19:33:51 crc 
kubenswrapper[5125]: I1208 19:33:51.651433 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.663007 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"] Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.677275 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"] Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.688877 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1872a46a-0e1f-469d-b403-8a1e0805d291-serving-cert\") pod \"1872a46a-0e1f-469d-b403-8a1e0805d291\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.688960 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfw8n\" (UniqueName: \"kubernetes.io/projected/840b1c0b-8303-40bb-a881-8a974ea23710-kube-api-access-lfw8n\") pod \"840b1c0b-8303-40bb-a881-8a974ea23710\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.689020 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-client-ca\") pod \"840b1c0b-8303-40bb-a881-8a974ea23710\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.689051 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-proxy-ca-bundles\") pod \"840b1c0b-8303-40bb-a881-8a974ea23710\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " Dec 
08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.689114 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-config\") pod \"840b1c0b-8303-40bb-a881-8a974ea23710\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.689148 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-client-ca\") pod \"1872a46a-0e1f-469d-b403-8a1e0805d291\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.689238 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/840b1c0b-8303-40bb-a881-8a974ea23710-tmp\") pod \"840b1c0b-8303-40bb-a881-8a974ea23710\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.689280 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840b1c0b-8303-40bb-a881-8a974ea23710-serving-cert\") pod \"840b1c0b-8303-40bb-a881-8a974ea23710\" (UID: \"840b1c0b-8303-40bb-a881-8a974ea23710\") " Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.689308 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1872a46a-0e1f-469d-b403-8a1e0805d291-tmp\") pod \"1872a46a-0e1f-469d-b403-8a1e0805d291\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.689374 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-config\") pod \"1872a46a-0e1f-469d-b403-8a1e0805d291\" (UID: 
\"1872a46a-0e1f-469d-b403-8a1e0805d291\") " Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.689432 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4krl\" (UniqueName: \"kubernetes.io/projected/1872a46a-0e1f-469d-b403-8a1e0805d291-kube-api-access-t4krl\") pod \"1872a46a-0e1f-469d-b403-8a1e0805d291\" (UID: \"1872a46a-0e1f-469d-b403-8a1e0805d291\") " Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.690274 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/840b1c0b-8303-40bb-a881-8a974ea23710-tmp" (OuterVolumeSpecName: "tmp") pod "840b1c0b-8303-40bb-a881-8a974ea23710" (UID: "840b1c0b-8303-40bb-a881-8a974ea23710"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.691010 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-config" (OuterVolumeSpecName: "config") pod "1872a46a-0e1f-469d-b403-8a1e0805d291" (UID: "1872a46a-0e1f-469d-b403-8a1e0805d291"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.691212 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-client-ca" (OuterVolumeSpecName: "client-ca") pod "1872a46a-0e1f-469d-b403-8a1e0805d291" (UID: "1872a46a-0e1f-469d-b403-8a1e0805d291"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.691222 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-client-ca" (OuterVolumeSpecName: "client-ca") pod "840b1c0b-8303-40bb-a881-8a974ea23710" (UID: "840b1c0b-8303-40bb-a881-8a974ea23710"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.691343 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-config" (OuterVolumeSpecName: "config") pod "840b1c0b-8303-40bb-a881-8a974ea23710" (UID: "840b1c0b-8303-40bb-a881-8a974ea23710"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.691407 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "840b1c0b-8303-40bb-a881-8a974ea23710" (UID: "840b1c0b-8303-40bb-a881-8a974ea23710"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.691890 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1872a46a-0e1f-469d-b403-8a1e0805d291-tmp" (OuterVolumeSpecName: "tmp") pod "1872a46a-0e1f-469d-b403-8a1e0805d291" (UID: "1872a46a-0e1f-469d-b403-8a1e0805d291"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.697013 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/840b1c0b-8303-40bb-a881-8a974ea23710-kube-api-access-lfw8n" (OuterVolumeSpecName: "kube-api-access-lfw8n") pod "840b1c0b-8303-40bb-a881-8a974ea23710" (UID: "840b1c0b-8303-40bb-a881-8a974ea23710"). InnerVolumeSpecName "kube-api-access-lfw8n". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.697912 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1872a46a-0e1f-469d-b403-8a1e0805d291-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1872a46a-0e1f-469d-b403-8a1e0805d291" (UID: "1872a46a-0e1f-469d-b403-8a1e0805d291"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.698710 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1872a46a-0e1f-469d-b403-8a1e0805d291-kube-api-access-t4krl" (OuterVolumeSpecName: "kube-api-access-t4krl") pod "1872a46a-0e1f-469d-b403-8a1e0805d291" (UID: "1872a46a-0e1f-469d-b403-8a1e0805d291"). InnerVolumeSpecName "kube-api-access-t4krl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.698733 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.703063 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/840b1c0b-8303-40bb-a881-8a974ea23710-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "840b1c0b-8303-40bb-a881-8a974ea23710" (UID: "840b1c0b-8303-40bb-a881-8a974ea23710"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.706456 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"] Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.790873 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lg4k\" (UniqueName: \"kubernetes.io/projected/976ed779-8691-4eee-8d33-2f21b6edbb35-kube-api-access-7lg4k\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.791491 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/976ed779-8691-4eee-8d33-2f21b6edbb35-tmp\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.791525 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwwpc\" (UniqueName: \"kubernetes.io/projected/2a13825a-60fd-423c-a33a-f7311f00e0df-kube-api-access-xwwpc\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.791759 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-config\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: 
\"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.791859 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-proxy-ca-bundles\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.791951 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a13825a-60fd-423c-a33a-f7311f00e0df-serving-cert\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792046 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976ed779-8691-4eee-8d33-2f21b6edbb35-client-ca\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792104 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2a13825a-60fd-423c-a33a-f7311f00e0df-tmp\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792176 5125 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976ed779-8691-4eee-8d33-2f21b6edbb35-config\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792263 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976ed779-8691-4eee-8d33-2f21b6edbb35-serving-cert\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792401 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-client-ca\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792495 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792525 5125 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792544 5125 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/840b1c0b-8303-40bb-a881-8a974ea23710-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 
19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792557 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840b1c0b-8303-40bb-a881-8a974ea23710-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792570 5125 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1872a46a-0e1f-469d-b403-8a1e0805d291-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792584 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1872a46a-0e1f-469d-b403-8a1e0805d291-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792597 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t4krl\" (UniqueName: \"kubernetes.io/projected/1872a46a-0e1f-469d-b403-8a1e0805d291-kube-api-access-t4krl\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792624 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1872a46a-0e1f-469d-b403-8a1e0805d291-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792637 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lfw8n\" (UniqueName: \"kubernetes.io/projected/840b1c0b-8303-40bb-a881-8a974ea23710-kube-api-access-lfw8n\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792650 5125 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.792662 5125 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/840b1c0b-8303-40bb-a881-8a974ea23710-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.895344 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/976ed779-8691-4eee-8d33-2f21b6edbb35-tmp\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.895403 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xwwpc\" (UniqueName: \"kubernetes.io/projected/2a13825a-60fd-423c-a33a-f7311f00e0df-kube-api-access-xwwpc\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.895441 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-config\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.895463 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-proxy-ca-bundles\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.895484 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2a13825a-60fd-423c-a33a-f7311f00e0df-serving-cert\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.895739 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976ed779-8691-4eee-8d33-2f21b6edbb35-client-ca\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.895864 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2a13825a-60fd-423c-a33a-f7311f00e0df-tmp\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.895889 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/976ed779-8691-4eee-8d33-2f21b6edbb35-tmp\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.895898 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976ed779-8691-4eee-8d33-2f21b6edbb35-config\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.896956 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2a13825a-60fd-423c-a33a-f7311f00e0df-tmp\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.897147 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976ed779-8691-4eee-8d33-2f21b6edbb35-serving-cert\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.897237 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-client-ca\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.897286 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7lg4k\" (UniqueName: \"kubernetes.io/projected/976ed779-8691-4eee-8d33-2f21b6edbb35-kube-api-access-7lg4k\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.897640 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-config\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.898010 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976ed779-8691-4eee-8d33-2f21b6edbb35-client-ca\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.898235 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-client-ca\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.899316 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976ed779-8691-4eee-8d33-2f21b6edbb35-config\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.901155 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a13825a-60fd-423c-a33a-f7311f00e0df-serving-cert\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.913757 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-proxy-ca-bundles\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.916169 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976ed779-8691-4eee-8d33-2f21b6edbb35-serving-cert\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.917804 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwwpc\" (UniqueName: \"kubernetes.io/projected/2a13825a-60fd-423c-a33a-f7311f00e0df-kube-api-access-xwwpc\") pod \"controller-manager-6d7d965b7d-g54vv\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") " pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.920181 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lg4k\" (UniqueName: \"kubernetes.io/projected/976ed779-8691-4eee-8d33-2f21b6edbb35-kube-api-access-7lg4k\") pod \"route-controller-manager-77d9c67c45-f6thn\" (UID: \"976ed779-8691-4eee-8d33-2f21b6edbb35\") " pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:51 crc kubenswrapper[5125]: I1208 19:33:51.967920 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.026798 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.170072 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"]
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.249193 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"]
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.414037 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7" event={"ID":"840b1c0b-8303-40bb-a881-8a974ea23710","Type":"ContainerDied","Data":"dcff60cad2ac06a50c75438297eea55420905c4e3e547dbf70d5be6064a27f4a"}
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.414106 5125 scope.go:117] "RemoveContainer" containerID="a91afdad36df325d6f4d1fd5450965f5cc07adf21d37118c50ac52b0143bd097"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.414707 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.416214 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn" event={"ID":"976ed779-8691-4eee-8d33-2f21b6edbb35","Type":"ContainerStarted","Data":"df1ebc50d57283e8369a1004efed5332e894cda4f93ac035be09a637efee836c"}
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.416247 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn" event={"ID":"976ed779-8691-4eee-8d33-2f21b6edbb35","Type":"ContainerStarted","Data":"3d74fb5abde38d50314f59f641e53b0449c192135510e0834a6d76939f311034"}
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.417681 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.419438 5125 patch_prober.go:28] interesting pod/route-controller-manager-77d9c67c45-f6thn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body=
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.419503 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn" podUID="976ed779-8691-4eee-8d33-2f21b6edbb35" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.422030 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.422470 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v" event={"ID":"1872a46a-0e1f-469d-b403-8a1e0805d291","Type":"ContainerDied","Data":"d41c7094337302c1a1d94ec77faa9764ac41bbbdfb78f24b8dd72ecee6faefb4"}
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.427914 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerStarted","Data":"47c3c7b274e1f8fb2e42d6843b6c70142b9720f62299f0a9859e9a777dd9f1a9"}
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.432491 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" event={"ID":"2a13825a-60fd-423c-a33a-f7311f00e0df","Type":"ContainerStarted","Data":"2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785"}
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.432566 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" event={"ID":"2a13825a-60fd-423c-a33a-f7311f00e0df","Type":"ContainerStarted","Data":"35e148f98bb7c0cb611a5c407b41e8ed6b13052f97a1c32cb55793989a2bf2ca"}
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.434288 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.436036 5125 patch_prober.go:28] interesting pod/controller-manager-6d7d965b7d-g54vv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.436245 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" podUID="2a13825a-60fd-423c-a33a-f7311f00e0df" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.443388 5125 scope.go:117] "RemoveContainer" containerID="3f55efd52ee79979c5783b52c59de168693467ffeb12975c2ed4136ae6015879"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.447015 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"]
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.455288 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8pnd7"]
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.458948 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn" podStartSLOduration=1.458921004 podStartE2EDuration="1.458921004s" podCreationTimestamp="2025-12-08 19:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:33:52.449049795 +0000 UTC m=+289.219540079" watchObservedRunningTime="2025-12-08 19:33:52.458921004 +0000 UTC m=+289.229411278"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.470120 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" podStartSLOduration=1.470091759 podStartE2EDuration="1.470091759s" podCreationTimestamp="2025-12-08 19:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:33:52.465432582 +0000 UTC m=+289.235922866" watchObservedRunningTime="2025-12-08 19:33:52.470091759 +0000 UTC m=+289.240582023"
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.493175 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"]
Dec 08 19:33:52 crc kubenswrapper[5125]: I1208 19:33:52.497569 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-lrh8v"]
Dec 08 19:33:53 crc kubenswrapper[5125]: I1208 19:33:53.380625 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"]
Dec 08 19:33:53 crc kubenswrapper[5125]: I1208 19:33:53.443127 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:53 crc kubenswrapper[5125]: I1208 19:33:53.444841 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77d9c67c45-f6thn"
Dec 08 19:33:53 crc kubenswrapper[5125]: I1208 19:33:53.778983 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1872a46a-0e1f-469d-b403-8a1e0805d291" path="/var/lib/kubelet/pods/1872a46a-0e1f-469d-b403-8a1e0805d291/volumes"
Dec 08 19:33:53 crc kubenswrapper[5125]: I1208 19:33:53.780914 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="840b1c0b-8303-40bb-a881-8a974ea23710" path="/var/lib/kubelet/pods/840b1c0b-8303-40bb-a881-8a974ea23710/volumes"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.445732 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" podUID="2a13825a-60fd-423c-a33a-f7311f00e0df" containerName="controller-manager" containerID="cri-o://2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785" gracePeriod=30
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.736542 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.764046 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8544797967-llktn"]
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.764594 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2a13825a-60fd-423c-a33a-f7311f00e0df" containerName="controller-manager"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.764655 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a13825a-60fd-423c-a33a-f7311f00e0df" containerName="controller-manager"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.764855 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="2a13825a-60fd-423c-a33a-f7311f00e0df" containerName="controller-manager"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.768799 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.780021 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8544797967-llktn"]
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.832992 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-config\") pod \"2a13825a-60fd-423c-a33a-f7311f00e0df\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") "
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.833322 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a13825a-60fd-423c-a33a-f7311f00e0df-serving-cert\") pod \"2a13825a-60fd-423c-a33a-f7311f00e0df\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") "
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.833453 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwwpc\" (UniqueName: \"kubernetes.io/projected/2a13825a-60fd-423c-a33a-f7311f00e0df-kube-api-access-xwwpc\") pod \"2a13825a-60fd-423c-a33a-f7311f00e0df\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") "
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.833555 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-client-ca\") pod \"2a13825a-60fd-423c-a33a-f7311f00e0df\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") "
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.833663 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2a13825a-60fd-423c-a33a-f7311f00e0df-tmp\") pod \"2a13825a-60fd-423c-a33a-f7311f00e0df\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") "
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.833754 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-proxy-ca-bundles\") pod \"2a13825a-60fd-423c-a33a-f7311f00e0df\" (UID: \"2a13825a-60fd-423c-a33a-f7311f00e0df\") "
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.833585 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-config" (OuterVolumeSpecName: "config") pod "2a13825a-60fd-423c-a33a-f7311f00e0df" (UID: "2a13825a-60fd-423c-a33a-f7311f00e0df"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.834025 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a13825a-60fd-423c-a33a-f7311f00e0df-tmp" (OuterVolumeSpecName: "tmp") pod "2a13825a-60fd-423c-a33a-f7311f00e0df" (UID: "2a13825a-60fd-423c-a33a-f7311f00e0df"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.834039 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-client-ca" (OuterVolumeSpecName: "client-ca") pod "2a13825a-60fd-423c-a33a-f7311f00e0df" (UID: "2a13825a-60fd-423c-a33a-f7311f00e0df"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.834228 5125 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-config\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.834309 5125 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-client-ca\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.834393 5125 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2a13825a-60fd-423c-a33a-f7311f00e0df-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.834287 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2a13825a-60fd-423c-a33a-f7311f00e0df" (UID: "2a13825a-60fd-423c-a33a-f7311f00e0df"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.838553 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a13825a-60fd-423c-a33a-f7311f00e0df-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2a13825a-60fd-423c-a33a-f7311f00e0df" (UID: "2a13825a-60fd-423c-a33a-f7311f00e0df"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.838637 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a13825a-60fd-423c-a33a-f7311f00e0df-kube-api-access-xwwpc" (OuterVolumeSpecName: "kube-api-access-xwwpc") pod "2a13825a-60fd-423c-a33a-f7311f00e0df" (UID: "2a13825a-60fd-423c-a33a-f7311f00e0df"). InnerVolumeSpecName "kube-api-access-xwwpc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.936079 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53016e3c-f943-4fcb-9ab8-e1456e54275c-config\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.936197 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53016e3c-f943-4fcb-9ab8-e1456e54275c-proxy-ca-bundles\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.936446 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/53016e3c-f943-4fcb-9ab8-e1456e54275c-tmp\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.936556 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53016e3c-f943-4fcb-9ab8-e1456e54275c-client-ca\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.936596 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wjbp\" (UniqueName: \"kubernetes.io/projected/53016e3c-f943-4fcb-9ab8-e1456e54275c-kube-api-access-8wjbp\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.936691 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53016e3c-f943-4fcb-9ab8-e1456e54275c-serving-cert\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.936767 5125 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a13825a-60fd-423c-a33a-f7311f00e0df-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.936790 5125 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a13825a-60fd-423c-a33a-f7311f00e0df-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:54 crc kubenswrapper[5125]: I1208 19:33:54.936800 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwwpc\" (UniqueName: \"kubernetes.io/projected/2a13825a-60fd-423c-a33a-f7311f00e0df-kube-api-access-xwwpc\") on node \"crc\" DevicePath \"\""
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.038045 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53016e3c-f943-4fcb-9ab8-e1456e54275c-serving-cert\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.038117 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53016e3c-f943-4fcb-9ab8-e1456e54275c-config\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.038151 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53016e3c-f943-4fcb-9ab8-e1456e54275c-proxy-ca-bundles\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.038188 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/53016e3c-f943-4fcb-9ab8-e1456e54275c-tmp\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.038236 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53016e3c-f943-4fcb-9ab8-e1456e54275c-client-ca\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.038265 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wjbp\" (UniqueName: \"kubernetes.io/projected/53016e3c-f943-4fcb-9ab8-e1456e54275c-kube-api-access-8wjbp\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.038755 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/53016e3c-f943-4fcb-9ab8-e1456e54275c-tmp\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.039291 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53016e3c-f943-4fcb-9ab8-e1456e54275c-client-ca\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.039484 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53016e3c-f943-4fcb-9ab8-e1456e54275c-config\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.039986 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53016e3c-f943-4fcb-9ab8-e1456e54275c-proxy-ca-bundles\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.044537 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53016e3c-f943-4fcb-9ab8-e1456e54275c-serving-cert\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.055927 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wjbp\" (UniqueName: \"kubernetes.io/projected/53016e3c-f943-4fcb-9ab8-e1456e54275c-kube-api-access-8wjbp\") pod \"controller-manager-8544797967-llktn\" (UID: \"53016e3c-f943-4fcb-9ab8-e1456e54275c\") " pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.091239 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.516017 5125 generic.go:358] "Generic (PLEG): container finished" podID="2a13825a-60fd-423c-a33a-f7311f00e0df" containerID="2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785" exitCode=0
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.516099 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" event={"ID":"2a13825a-60fd-423c-a33a-f7311f00e0df","Type":"ContainerDied","Data":"2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785"}
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.516761 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv" event={"ID":"2a13825a-60fd-423c-a33a-f7311f00e0df","Type":"ContainerDied","Data":"35e148f98bb7c0cb611a5c407b41e8ed6b13052f97a1c32cb55793989a2bf2ca"}
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.516796 5125 scope.go:117] "RemoveContainer" containerID="2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.516148 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.528535 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8544797967-llktn"]
Dec 08 19:33:55 crc kubenswrapper[5125]: W1208 19:33:55.534287 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53016e3c_f943_4fcb_9ab8_e1456e54275c.slice/crio-89be9d27b55d55b5a3e1112cd089401acae313f10a2cfeed258705ec9cefc893 WatchSource:0}: Error finding container 89be9d27b55d55b5a3e1112cd089401acae313f10a2cfeed258705ec9cefc893: Status 404 returned error can't find the container with id 89be9d27b55d55b5a3e1112cd089401acae313f10a2cfeed258705ec9cefc893
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.541573 5125 scope.go:117] "RemoveContainer" containerID="2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785"
Dec 08 19:33:55 crc kubenswrapper[5125]: E1208 19:33:55.542005 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785\": container with ID starting with 2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785 not found: ID does not exist" containerID="2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.542067 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785"} err="failed to get container status \"2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785\": rpc error: code = NotFound desc = could not find container \"2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785\": container with ID starting with 2653901f675d07ae1f0ec9898a20befe8105c9efbe82a7b11eb82b3c8bc42785 not found: ID does not exist"
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.550584 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"]
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.566272 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6d7d965b7d-g54vv"]
Dec 08 19:33:55 crc kubenswrapper[5125]: I1208 19:33:55.773325 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a13825a-60fd-423c-a33a-f7311f00e0df" path="/var/lib/kubelet/pods/2a13825a-60fd-423c-a33a-f7311f00e0df/volumes"
Dec 08 19:33:56 crc kubenswrapper[5125]: I1208 19:33:56.524048 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8544797967-llktn" event={"ID":"53016e3c-f943-4fcb-9ab8-e1456e54275c","Type":"ContainerStarted","Data":"8dcb9a8405c796f87f2fc98cccf0855fe615141f2c61d95df6e671add9029072"}
Dec 08 19:33:56 crc kubenswrapper[5125]: I1208 19:33:56.524103 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8544797967-llktn" event={"ID":"53016e3c-f943-4fcb-9ab8-e1456e54275c","Type":"ContainerStarted","Data":"89be9d27b55d55b5a3e1112cd089401acae313f10a2cfeed258705ec9cefc893"}
Dec 08 19:33:56 crc kubenswrapper[5125]: I1208 19:33:56.524470 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:56 crc kubenswrapper[5125]: I1208 19:33:56.530832 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8544797967-llktn"
Dec 08 19:33:56 crc kubenswrapper[5125]: I1208 19:33:56.541440 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8544797967-llktn" podStartSLOduration=3.541419881 podStartE2EDuration="3.541419881s" podCreationTimestamp="2025-12-08 19:33:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:33:56.537016631 +0000 UTC m=+293.307506925" watchObservedRunningTime="2025-12-08 19:33:56.541419881 +0000 UTC m=+293.311910165"
Dec 08 19:33:59 crc kubenswrapper[5125]: I1208 19:33:59.818906 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9gdtq"]
Dec 08 19:33:59 crc kubenswrapper[5125]: I1208 19:33:59.828199 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9gdtq"
Dec 08 19:33:59 crc kubenswrapper[5125]: I1208 19:33:59.834819 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Dec 08 19:33:59 crc kubenswrapper[5125]: I1208 19:33:59.851770 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9gdtq"]
Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.001134 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ec2259e-fa77-483b-b9f7-09d483849e65-utilities\") pod \"certified-operators-9gdtq\" (UID: \"2ec2259e-fa77-483b-b9f7-09d483849e65\") " pod="openshift-marketplace/certified-operators-9gdtq"
Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.001332 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcngm\" (UniqueName: \"kubernetes.io/projected/2ec2259e-fa77-483b-b9f7-09d483849e65-kube-api-access-bcngm\") pod \"certified-operators-9gdtq\" (UID: \"2ec2259e-fa77-483b-b9f7-09d483849e65\") " pod="openshift-marketplace/certified-operators-9gdtq"
Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.001386 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ec2259e-fa77-483b-b9f7-09d483849e65-catalog-content\") pod \"certified-operators-9gdtq\" (UID: \"2ec2259e-fa77-483b-b9f7-09d483849e65\") " pod="openshift-marketplace/certified-operators-9gdtq"
Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.019702 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h5j8f"]
Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.023499 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h5j8f"
Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.028242 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.035909 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h5j8f"]
Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.102956 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bcngm\" (UniqueName: \"kubernetes.io/projected/2ec2259e-fa77-483b-b9f7-09d483849e65-kube-api-access-bcngm\") pod \"certified-operators-9gdtq\" (UID: \"2ec2259e-fa77-483b-b9f7-09d483849e65\") " pod="openshift-marketplace/certified-operators-9gdtq"
Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.103017 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ec2259e-fa77-483b-b9f7-09d483849e65-catalog-content\") pod \"certified-operators-9gdtq\" (UID: \"2ec2259e-fa77-483b-b9f7-09d483849e65\") "
pod="openshift-marketplace/certified-operators-9gdtq" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.103050 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ec2259e-fa77-483b-b9f7-09d483849e65-utilities\") pod \"certified-operators-9gdtq\" (UID: \"2ec2259e-fa77-483b-b9f7-09d483849e65\") " pod="openshift-marketplace/certified-operators-9gdtq" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.103559 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ec2259e-fa77-483b-b9f7-09d483849e65-utilities\") pod \"certified-operators-9gdtq\" (UID: \"2ec2259e-fa77-483b-b9f7-09d483849e65\") " pod="openshift-marketplace/certified-operators-9gdtq" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.103629 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ec2259e-fa77-483b-b9f7-09d483849e65-catalog-content\") pod \"certified-operators-9gdtq\" (UID: \"2ec2259e-fa77-483b-b9f7-09d483849e65\") " pod="openshift-marketplace/certified-operators-9gdtq" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.122651 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcngm\" (UniqueName: \"kubernetes.io/projected/2ec2259e-fa77-483b-b9f7-09d483849e65-kube-api-access-bcngm\") pod \"certified-operators-9gdtq\" (UID: \"2ec2259e-fa77-483b-b9f7-09d483849e65\") " pod="openshift-marketplace/certified-operators-9gdtq" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.161306 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9gdtq" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.203947 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f605a6-9423-43c8-905c-2b12505dc2fc-catalog-content\") pod \"redhat-operators-h5j8f\" (UID: \"00f605a6-9423-43c8-905c-2b12505dc2fc\") " pod="openshift-marketplace/redhat-operators-h5j8f" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.203997 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dhn6\" (UniqueName: \"kubernetes.io/projected/00f605a6-9423-43c8-905c-2b12505dc2fc-kube-api-access-7dhn6\") pod \"redhat-operators-h5j8f\" (UID: \"00f605a6-9423-43c8-905c-2b12505dc2fc\") " pod="openshift-marketplace/redhat-operators-h5j8f" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.204039 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f605a6-9423-43c8-905c-2b12505dc2fc-utilities\") pod \"redhat-operators-h5j8f\" (UID: \"00f605a6-9423-43c8-905c-2b12505dc2fc\") " pod="openshift-marketplace/redhat-operators-h5j8f" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.305443 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f605a6-9423-43c8-905c-2b12505dc2fc-catalog-content\") pod \"redhat-operators-h5j8f\" (UID: \"00f605a6-9423-43c8-905c-2b12505dc2fc\") " pod="openshift-marketplace/redhat-operators-h5j8f" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.305813 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7dhn6\" (UniqueName: \"kubernetes.io/projected/00f605a6-9423-43c8-905c-2b12505dc2fc-kube-api-access-7dhn6\") pod 
\"redhat-operators-h5j8f\" (UID: \"00f605a6-9423-43c8-905c-2b12505dc2fc\") " pod="openshift-marketplace/redhat-operators-h5j8f" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.305858 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f605a6-9423-43c8-905c-2b12505dc2fc-utilities\") pod \"redhat-operators-h5j8f\" (UID: \"00f605a6-9423-43c8-905c-2b12505dc2fc\") " pod="openshift-marketplace/redhat-operators-h5j8f" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.306120 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f605a6-9423-43c8-905c-2b12505dc2fc-catalog-content\") pod \"redhat-operators-h5j8f\" (UID: \"00f605a6-9423-43c8-905c-2b12505dc2fc\") " pod="openshift-marketplace/redhat-operators-h5j8f" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.306203 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f605a6-9423-43c8-905c-2b12505dc2fc-utilities\") pod \"redhat-operators-h5j8f\" (UID: \"00f605a6-9423-43c8-905c-2b12505dc2fc\") " pod="openshift-marketplace/redhat-operators-h5j8f" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.330179 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dhn6\" (UniqueName: \"kubernetes.io/projected/00f605a6-9423-43c8-905c-2b12505dc2fc-kube-api-access-7dhn6\") pod \"redhat-operators-h5j8f\" (UID: \"00f605a6-9423-43c8-905c-2b12505dc2fc\") " pod="openshift-marketplace/redhat-operators-h5j8f" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.351917 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h5j8f" Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.355987 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9gdtq"] Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.538091 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h5j8f"] Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.554875 5125 generic.go:358] "Generic (PLEG): container finished" podID="2ec2259e-fa77-483b-b9f7-09d483849e65" containerID="301f41b9e93ed0ad13fec6b8a2f8dc463d40eb5cddd85528f0f7dda215b3ebde" exitCode=0 Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.555091 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gdtq" event={"ID":"2ec2259e-fa77-483b-b9f7-09d483849e65","Type":"ContainerDied","Data":"301f41b9e93ed0ad13fec6b8a2f8dc463d40eb5cddd85528f0f7dda215b3ebde"} Dec 08 19:34:00 crc kubenswrapper[5125]: I1208 19:34:00.555136 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gdtq" event={"ID":"2ec2259e-fa77-483b-b9f7-09d483849e65","Type":"ContainerStarted","Data":"758b26205c99180c22e30533018023cac389bbb108828052d4c1d2d1d0066626"} Dec 08 19:34:00 crc kubenswrapper[5125]: W1208 19:34:00.568318 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00f605a6_9423_43c8_905c_2b12505dc2fc.slice/crio-336275070c595940509577405bcaeb6be8cc27594fa8d04d15f980ddb39f62e8 WatchSource:0}: Error finding container 336275070c595940509577405bcaeb6be8cc27594fa8d04d15f980ddb39f62e8: Status 404 returned error can't find the container with id 336275070c595940509577405bcaeb6be8cc27594fa8d04d15f980ddb39f62e8 Dec 08 19:34:01 crc kubenswrapper[5125]: I1208 19:34:01.562438 5125 generic.go:358] "Generic (PLEG): container finished" 
podID="00f605a6-9423-43c8-905c-2b12505dc2fc" containerID="fc637e9dece3c714825a4962c4964be3c45a4bc089ad348b99ea8c58f8748e88" exitCode=0 Dec 08 19:34:01 crc kubenswrapper[5125]: I1208 19:34:01.562523 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5j8f" event={"ID":"00f605a6-9423-43c8-905c-2b12505dc2fc","Type":"ContainerDied","Data":"fc637e9dece3c714825a4962c4964be3c45a4bc089ad348b99ea8c58f8748e88"} Dec 08 19:34:01 crc kubenswrapper[5125]: I1208 19:34:01.563006 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5j8f" event={"ID":"00f605a6-9423-43c8-905c-2b12505dc2fc","Type":"ContainerStarted","Data":"336275070c595940509577405bcaeb6be8cc27594fa8d04d15f980ddb39f62e8"} Dec 08 19:34:01 crc kubenswrapper[5125]: I1208 19:34:01.566510 5125 generic.go:358] "Generic (PLEG): container finished" podID="2ec2259e-fa77-483b-b9f7-09d483849e65" containerID="36c3fe07e09621bebe08f6c717fd3e5a5aa49cade337590f48a48c8da1de9b36" exitCode=0 Dec 08 19:34:01 crc kubenswrapper[5125]: I1208 19:34:01.566667 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gdtq" event={"ID":"2ec2259e-fa77-483b-b9f7-09d483849e65","Type":"ContainerDied","Data":"36c3fe07e09621bebe08f6c717fd3e5a5aa49cade337590f48a48c8da1de9b36"} Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.216884 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w9v64"] Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.229502 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w9v64" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.232017 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.240777 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9v64"] Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.330749 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7629a4c9-c75e-4523-af23-bde168421f14-utilities\") pod \"community-operators-w9v64\" (UID: \"7629a4c9-c75e-4523-af23-bde168421f14\") " pod="openshift-marketplace/community-operators-w9v64" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.330827 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7629a4c9-c75e-4523-af23-bde168421f14-catalog-content\") pod \"community-operators-w9v64\" (UID: \"7629a4c9-c75e-4523-af23-bde168421f14\") " pod="openshift-marketplace/community-operators-w9v64" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.330946 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trbr4\" (UniqueName: \"kubernetes.io/projected/7629a4c9-c75e-4523-af23-bde168421f14-kube-api-access-trbr4\") pod \"community-operators-w9v64\" (UID: \"7629a4c9-c75e-4523-af23-bde168421f14\") " pod="openshift-marketplace/community-operators-w9v64" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.413709 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9m8"] Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.420741 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc9m8" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.423210 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.431105 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9m8"] Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.431719 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjb7p\" (UniqueName: \"kubernetes.io/projected/45f623b6-715e-49bc-a570-1bd15effb4f5-kube-api-access-sjb7p\") pod \"redhat-marketplace-zc9m8\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") " pod="openshift-marketplace/redhat-marketplace-zc9m8" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.431768 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-utilities\") pod \"redhat-marketplace-zc9m8\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") " pod="openshift-marketplace/redhat-marketplace-zc9m8" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.431809 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trbr4\" (UniqueName: \"kubernetes.io/projected/7629a4c9-c75e-4523-af23-bde168421f14-kube-api-access-trbr4\") pod \"community-operators-w9v64\" (UID: \"7629a4c9-c75e-4523-af23-bde168421f14\") " pod="openshift-marketplace/community-operators-w9v64" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.431866 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7629a4c9-c75e-4523-af23-bde168421f14-utilities\") pod \"community-operators-w9v64\" (UID: 
\"7629a4c9-c75e-4523-af23-bde168421f14\") " pod="openshift-marketplace/community-operators-w9v64" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.431898 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-catalog-content\") pod \"redhat-marketplace-zc9m8\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") " pod="openshift-marketplace/redhat-marketplace-zc9m8" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.432219 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7629a4c9-c75e-4523-af23-bde168421f14-catalog-content\") pod \"community-operators-w9v64\" (UID: \"7629a4c9-c75e-4523-af23-bde168421f14\") " pod="openshift-marketplace/community-operators-w9v64" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.432368 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7629a4c9-c75e-4523-af23-bde168421f14-utilities\") pod \"community-operators-w9v64\" (UID: \"7629a4c9-c75e-4523-af23-bde168421f14\") " pod="openshift-marketplace/community-operators-w9v64" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.432646 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7629a4c9-c75e-4523-af23-bde168421f14-catalog-content\") pod \"community-operators-w9v64\" (UID: \"7629a4c9-c75e-4523-af23-bde168421f14\") " pod="openshift-marketplace/community-operators-w9v64" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.464715 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trbr4\" (UniqueName: \"kubernetes.io/projected/7629a4c9-c75e-4523-af23-bde168421f14-kube-api-access-trbr4\") pod \"community-operators-w9v64\" (UID: 
\"7629a4c9-c75e-4523-af23-bde168421f14\") " pod="openshift-marketplace/community-operators-w9v64" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.539105 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sjb7p\" (UniqueName: \"kubernetes.io/projected/45f623b6-715e-49bc-a570-1bd15effb4f5-kube-api-access-sjb7p\") pod \"redhat-marketplace-zc9m8\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") " pod="openshift-marketplace/redhat-marketplace-zc9m8" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.539208 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-utilities\") pod \"redhat-marketplace-zc9m8\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") " pod="openshift-marketplace/redhat-marketplace-zc9m8" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.539375 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-catalog-content\") pod \"redhat-marketplace-zc9m8\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") " pod="openshift-marketplace/redhat-marketplace-zc9m8" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.540050 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-catalog-content\") pod \"redhat-marketplace-zc9m8\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") " pod="openshift-marketplace/redhat-marketplace-zc9m8" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.540937 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-utilities\") pod \"redhat-marketplace-zc9m8\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") " 
pod="openshift-marketplace/redhat-marketplace-zc9m8" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.550839 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9v64" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.563234 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjb7p\" (UniqueName: \"kubernetes.io/projected/45f623b6-715e-49bc-a570-1bd15effb4f5-kube-api-access-sjb7p\") pod \"redhat-marketplace-zc9m8\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") " pod="openshift-marketplace/redhat-marketplace-zc9m8" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.574029 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5j8f" event={"ID":"00f605a6-9423-43c8-905c-2b12505dc2fc","Type":"ContainerStarted","Data":"df40547e1f4357d28ccc7223e0ae38fd7698b8a180fc3dc0ee3f9736d1bd242f"} Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.582168 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9gdtq" event={"ID":"2ec2259e-fa77-483b-b9f7-09d483849e65","Type":"ContainerStarted","Data":"f099c87451b984b82f501e1e21a0e215c0633edeb5c09e11f132fd6de85a9561"} Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.623654 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9gdtq" podStartSLOduration=3.08525261 podStartE2EDuration="3.623640091s" podCreationTimestamp="2025-12-08 19:33:59 +0000 UTC" firstStartedPulling="2025-12-08 19:34:00.556021364 +0000 UTC m=+297.326511648" lastFinishedPulling="2025-12-08 19:34:01.094408855 +0000 UTC m=+297.864899129" observedRunningTime="2025-12-08 19:34:02.621590795 +0000 UTC m=+299.392081079" watchObservedRunningTime="2025-12-08 19:34:02.623640091 +0000 UTC m=+299.394130365" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.737067 5125 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc9m8" Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.794240 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9v64"] Dec 08 19:34:02 crc kubenswrapper[5125]: W1208 19:34:02.802207 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7629a4c9_c75e_4523_af23_bde168421f14.slice/crio-b554a6a0b8af42249e3243087bfe2fbae23d518dfee5fa408c572982e913a534 WatchSource:0}: Error finding container b554a6a0b8af42249e3243087bfe2fbae23d518dfee5fa408c572982e913a534: Status 404 returned error can't find the container with id b554a6a0b8af42249e3243087bfe2fbae23d518dfee5fa408c572982e913a534 Dec 08 19:34:02 crc kubenswrapper[5125]: I1208 19:34:02.963374 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9m8"] Dec 08 19:34:03 crc kubenswrapper[5125]: W1208 19:34:03.091466 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45f623b6_715e_49bc_a570_1bd15effb4f5.slice/crio-2da2208964f0a8f01d3f06d67a34b56e6e3d669055e7344cd030e8f75a17c018 WatchSource:0}: Error finding container 2da2208964f0a8f01d3f06d67a34b56e6e3d669055e7344cd030e8f75a17c018: Status 404 returned error can't find the container with id 2da2208964f0a8f01d3f06d67a34b56e6e3d669055e7344cd030e8f75a17c018 Dec 08 19:34:03 crc kubenswrapper[5125]: I1208 19:34:03.592081 5125 generic.go:358] "Generic (PLEG): container finished" podID="7629a4c9-c75e-4523-af23-bde168421f14" containerID="2326b4c55af25a9c5f36b1c900cceadb147c36c831283aae92e5961ab1bbf85e" exitCode=0 Dec 08 19:34:03 crc kubenswrapper[5125]: I1208 19:34:03.592199 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9v64" 
event={"ID":"7629a4c9-c75e-4523-af23-bde168421f14","Type":"ContainerDied","Data":"2326b4c55af25a9c5f36b1c900cceadb147c36c831283aae92e5961ab1bbf85e"} Dec 08 19:34:03 crc kubenswrapper[5125]: I1208 19:34:03.592367 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9v64" event={"ID":"7629a4c9-c75e-4523-af23-bde168421f14","Type":"ContainerStarted","Data":"b554a6a0b8af42249e3243087bfe2fbae23d518dfee5fa408c572982e913a534"} Dec 08 19:34:03 crc kubenswrapper[5125]: I1208 19:34:03.594286 5125 generic.go:358] "Generic (PLEG): container finished" podID="45f623b6-715e-49bc-a570-1bd15effb4f5" containerID="13ba0e7f154e48ac828db7f7f5d3fe68ede8bfbdd6535a66efbe94d63500d64e" exitCode=0 Dec 08 19:34:03 crc kubenswrapper[5125]: I1208 19:34:03.594348 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc9m8" event={"ID":"45f623b6-715e-49bc-a570-1bd15effb4f5","Type":"ContainerDied","Data":"13ba0e7f154e48ac828db7f7f5d3fe68ede8bfbdd6535a66efbe94d63500d64e"} Dec 08 19:34:03 crc kubenswrapper[5125]: I1208 19:34:03.594402 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc9m8" event={"ID":"45f623b6-715e-49bc-a570-1bd15effb4f5","Type":"ContainerStarted","Data":"2da2208964f0a8f01d3f06d67a34b56e6e3d669055e7344cd030e8f75a17c018"} Dec 08 19:34:03 crc kubenswrapper[5125]: I1208 19:34:03.598247 5125 generic.go:358] "Generic (PLEG): container finished" podID="00f605a6-9423-43c8-905c-2b12505dc2fc" containerID="df40547e1f4357d28ccc7223e0ae38fd7698b8a180fc3dc0ee3f9736d1bd242f" exitCode=0 Dec 08 19:34:03 crc kubenswrapper[5125]: I1208 19:34:03.598400 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5j8f" event={"ID":"00f605a6-9423-43c8-905c-2b12505dc2fc","Type":"ContainerDied","Data":"df40547e1f4357d28ccc7223e0ae38fd7698b8a180fc3dc0ee3f9736d1bd242f"} Dec 08 19:34:03 crc kubenswrapper[5125]: I1208 
19:34:03.891064 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:34:03 crc kubenswrapper[5125]: I1208 19:34:03.891679 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:34:04 crc kubenswrapper[5125]: I1208 19:34:04.609226 5125 generic.go:358] "Generic (PLEG): container finished" podID="7629a4c9-c75e-4523-af23-bde168421f14" containerID="f6e0556e359d1f31b04b563feab2ce857ee6d513d2cf50cc41490f39298cd565" exitCode=0 Dec 08 19:34:04 crc kubenswrapper[5125]: I1208 19:34:04.609353 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9v64" event={"ID":"7629a4c9-c75e-4523-af23-bde168421f14","Type":"ContainerDied","Data":"f6e0556e359d1f31b04b563feab2ce857ee6d513d2cf50cc41490f39298cd565"} Dec 08 19:34:04 crc kubenswrapper[5125]: I1208 19:34:04.611367 5125 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:34:04 crc kubenswrapper[5125]: I1208 19:34:04.613453 5125 generic.go:358] "Generic (PLEG): container finished" podID="45f623b6-715e-49bc-a570-1bd15effb4f5" containerID="d4a9c850c83720c0b1f939b9bca5ec0651b4f40eb50205066795aee541c91452" exitCode=0 Dec 08 19:34:04 crc kubenswrapper[5125]: I1208 19:34:04.613581 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc9m8" event={"ID":"45f623b6-715e-49bc-a570-1bd15effb4f5","Type":"ContainerDied","Data":"d4a9c850c83720c0b1f939b9bca5ec0651b4f40eb50205066795aee541c91452"} Dec 08 19:34:04 crc kubenswrapper[5125]: I1208 19:34:04.618452 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5j8f" 
event={"ID":"00f605a6-9423-43c8-905c-2b12505dc2fc","Type":"ContainerStarted","Data":"def8d50601dd5b964c05c2177d4147227ddaf62d8d2490554ec914a0f8b63813"}
Dec 08 19:34:04 crc kubenswrapper[5125]: I1208 19:34:04.677421 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h5j8f" podStartSLOduration=4.956710967 podStartE2EDuration="5.677402481s" podCreationTimestamp="2025-12-08 19:33:59 +0000 UTC" firstStartedPulling="2025-12-08 19:34:01.563508895 +0000 UTC m=+298.333999179" lastFinishedPulling="2025-12-08 19:34:02.284200419 +0000 UTC m=+299.054690693" observedRunningTime="2025-12-08 19:34:04.67076061 +0000 UTC m=+301.441250924" watchObservedRunningTime="2025-12-08 19:34:04.677402481 +0000 UTC m=+301.447892755"
Dec 08 19:34:05 crc kubenswrapper[5125]: I1208 19:34:05.628703 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9v64" event={"ID":"7629a4c9-c75e-4523-af23-bde168421f14","Type":"ContainerStarted","Data":"17aefdc3e49d6e668c385190dbd6d39634dd69f7f0fccf8be23bc06226a6fd12"}
Dec 08 19:34:05 crc kubenswrapper[5125]: I1208 19:34:05.632917 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc9m8" event={"ID":"45f623b6-715e-49bc-a570-1bd15effb4f5","Type":"ContainerStarted","Data":"82365a532581dbff147b4fecbde17df6ef597ce16c4d2af233e94ba7124566d5"}
Dec 08 19:34:05 crc kubenswrapper[5125]: I1208 19:34:05.651012 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w9v64" podStartSLOduration=3.085958508 podStartE2EDuration="3.650987576s" podCreationTimestamp="2025-12-08 19:34:02 +0000 UTC" firstStartedPulling="2025-12-08 19:34:03.597056192 +0000 UTC m=+300.367546496" lastFinishedPulling="2025-12-08 19:34:04.16208529 +0000 UTC m=+300.932575564" observedRunningTime="2025-12-08 19:34:05.647681016 +0000 UTC m=+302.418171340" watchObservedRunningTime="2025-12-08 19:34:05.650987576 +0000 UTC m=+302.421477850"
Dec 08 19:34:05 crc kubenswrapper[5125]: I1208 19:34:05.671690 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zc9m8" podStartSLOduration=3.165541351 podStartE2EDuration="3.671668681s" podCreationTimestamp="2025-12-08 19:34:02 +0000 UTC" firstStartedPulling="2025-12-08 19:34:03.595309365 +0000 UTC m=+300.365799649" lastFinishedPulling="2025-12-08 19:34:04.101436705 +0000 UTC m=+300.871926979" observedRunningTime="2025-12-08 19:34:05.670736915 +0000 UTC m=+302.441227199" watchObservedRunningTime="2025-12-08 19:34:05.671668681 +0000 UTC m=+302.442158955"
Dec 08 19:34:07 crc kubenswrapper[5125]: I1208 19:34:07.302506 5125 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 08 19:34:10 crc kubenswrapper[5125]: I1208 19:34:10.162449 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-9gdtq"
Dec 08 19:34:10 crc kubenswrapper[5125]: I1208 19:34:10.162518 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9gdtq"
Dec 08 19:34:10 crc kubenswrapper[5125]: I1208 19:34:10.207463 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9gdtq"
Dec 08 19:34:10 crc kubenswrapper[5125]: I1208 19:34:10.353434 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-h5j8f"
Dec 08 19:34:10 crc kubenswrapper[5125]: I1208 19:34:10.353565 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h5j8f"
Dec 08 19:34:10 crc kubenswrapper[5125]: I1208 19:34:10.384735 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h5j8f"
Dec 08 19:34:10 crc kubenswrapper[5125]: I1208 19:34:10.705485 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h5j8f"
Dec 08 19:34:10 crc kubenswrapper[5125]: I1208 19:34:10.718122 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9gdtq"
Dec 08 19:34:12 crc kubenswrapper[5125]: I1208 19:34:12.551673 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-w9v64"
Dec 08 19:34:12 crc kubenswrapper[5125]: I1208 19:34:12.552201 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w9v64"
Dec 08 19:34:12 crc kubenswrapper[5125]: I1208 19:34:12.613132 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w9v64"
Dec 08 19:34:12 crc kubenswrapper[5125]: I1208 19:34:12.709553 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w9v64"
Dec 08 19:34:12 crc kubenswrapper[5125]: I1208 19:34:12.737812 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zc9m8"
Dec 08 19:34:12 crc kubenswrapper[5125]: I1208 19:34:12.737857 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-zc9m8"
Dec 08 19:34:12 crc kubenswrapper[5125]: I1208 19:34:12.774960 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zc9m8"
Dec 08 19:34:13 crc kubenswrapper[5125]: I1208 19:34:13.735808 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zc9m8"
Dec 08 19:35:51 crc kubenswrapper[5125]: I1208 19:35:51.101042 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:35:51 crc kubenswrapper[5125]: I1208 19:35:51.101717 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:36:21 crc kubenswrapper[5125]: I1208 19:36:21.101246 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:36:21 crc kubenswrapper[5125]: I1208 19:36:21.101917 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:36:51 crc kubenswrapper[5125]: I1208 19:36:51.101410 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:36:51 crc kubenswrapper[5125]: I1208 19:36:51.101961 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:36:51 crc kubenswrapper[5125]: I1208 19:36:51.102025 5125 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slhjr"
Dec 08 19:36:51 crc kubenswrapper[5125]: I1208 19:36:51.102722 5125 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47c3c7b274e1f8fb2e42d6843b6c70142b9720f62299f0a9859e9a777dd9f1a9"} pod="openshift-machine-config-operator/machine-config-daemon-slhjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 19:36:51 crc kubenswrapper[5125]: I1208 19:36:51.102791 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" containerID="cri-o://47c3c7b274e1f8fb2e42d6843b6c70142b9720f62299f0a9859e9a777dd9f1a9" gracePeriod=600
Dec 08 19:36:51 crc kubenswrapper[5125]: I1208 19:36:51.655217 5125 generic.go:358] "Generic (PLEG): container finished" podID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerID="47c3c7b274e1f8fb2e42d6843b6c70142b9720f62299f0a9859e9a777dd9f1a9" exitCode=0
Dec 08 19:36:51 crc kubenswrapper[5125]: I1208 19:36:51.655648 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerDied","Data":"47c3c7b274e1f8fb2e42d6843b6c70142b9720f62299f0a9859e9a777dd9f1a9"}
Dec 08 19:36:51 crc kubenswrapper[5125]: I1208 19:36:51.655673 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerStarted","Data":"3eaff9ff574646a35fa068c19d68106caffff9d6e28141d09b7049a7e34edb72"}
Dec 08 19:36:51 crc kubenswrapper[5125]: I1208 19:36:51.655688 5125 scope.go:117] "RemoveContainer" containerID="a86a0816bac7ca3fa402c6544237e9e92be21df715faf34c0d65ab20b3280854"
Dec 08 19:38:04 crc kubenswrapper[5125]: I1208 19:38:04.080094 5125 scope.go:117] "RemoveContainer" containerID="146d7b3f8a4beacbb9cbf12333032fde5cc05be086e8c0df72f7e18f5eed9831"
Dec 08 19:38:04 crc kubenswrapper[5125]: I1208 19:38:04.101204 5125 scope.go:117] "RemoveContainer" containerID="384d2a91ada797587ea0f803ee515614431b5d2ea043bf40416ad323b80e544a"
Dec 08 19:38:04 crc kubenswrapper[5125]: I1208 19:38:04.120659 5125 scope.go:117] "RemoveContainer" containerID="5ea6a05bd5769663fd159e6c1bb044daf2eb85ed7544ddad6f5817224125cb9d"
Dec 08 19:38:51 crc kubenswrapper[5125]: I1208 19:38:51.101985 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:38:51 crc kubenswrapper[5125]: I1208 19:38:51.102680 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:39:03 crc kubenswrapper[5125]: I1208 19:39:03.965226 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 19:39:03 crc kubenswrapper[5125]: I1208 19:39:03.967684 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.596117 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx"]
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.596399 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" podUID="48d0e864-6620-4a75-baa4-8653836f3aab" containerName="kube-rbac-proxy" containerID="cri-o://16e1ad7ce234905f668415641ca07de1f1c979cfa934d9f44009b0809d0096a9" gracePeriod=30
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.596789 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" podUID="48d0e864-6620-4a75-baa4-8653836f3aab" containerName="ovnkube-cluster-manager" containerID="cri-o://b20b0a9605f05d0adc59fb9552e2669c3781c6b2a3e5d64103d79ca5707cf336" gracePeriod=30
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.774260 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.810285 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"]
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.810826 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48d0e864-6620-4a75-baa4-8653836f3aab" containerName="ovnkube-cluster-manager"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.810914 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d0e864-6620-4a75-baa4-8653836f3aab" containerName="ovnkube-cluster-manager"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.810939 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="48d0e864-6620-4a75-baa4-8653836f3aab" containerName="kube-rbac-proxy"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.810947 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d0e864-6620-4a75-baa4-8653836f3aab" containerName="kube-rbac-proxy"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.811054 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="48d0e864-6620-4a75-baa4-8653836f3aab" containerName="ovnkube-cluster-manager"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.811084 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="48d0e864-6620-4a75-baa4-8653836f3aab" containerName="kube-rbac-proxy"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.814735 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.817846 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k9whn"]
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.818513 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovn-controller" containerID="cri-o://9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100" gracePeriod=30
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.818563 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="sbdb" containerID="cri-o://b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1" gracePeriod=30
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.818552 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="northd" containerID="cri-o://851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3" gracePeriod=30
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.818620 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="nbdb" containerID="cri-o://6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa" gracePeriod=30
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.818635 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="kube-rbac-proxy-node" containerID="cri-o://36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9" gracePeriod=30
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.818690 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea" gracePeriod=30
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.818672 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovn-acl-logging" containerID="cri-o://f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe" gracePeriod=30
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.859120 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovnkube-controller" containerID="cri-o://7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41" gracePeriod=30
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.897974 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-ovnkube-config\") pod \"48d0e864-6620-4a75-baa4-8653836f3aab\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") "
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.898046 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvrb\" (UniqueName: \"kubernetes.io/projected/48d0e864-6620-4a75-baa4-8653836f3aab-kube-api-access-twvrb\") pod \"48d0e864-6620-4a75-baa4-8653836f3aab\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") "
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.898064 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-env-overrides\") pod \"48d0e864-6620-4a75-baa4-8653836f3aab\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") "
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.898154 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/48d0e864-6620-4a75-baa4-8653836f3aab-ovn-control-plane-metrics-cert\") pod \"48d0e864-6620-4a75-baa4-8653836f3aab\" (UID: \"48d0e864-6620-4a75-baa4-8653836f3aab\") "
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.898265 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3aaa7c67-0452-440f-8998-6ffa475eff9f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.898285 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3aaa7c67-0452-440f-8998-6ffa475eff9f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.898332 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8gg\" (UniqueName: \"kubernetes.io/projected/3aaa7c67-0452-440f-8998-6ffa475eff9f-kube-api-access-td8gg\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.898413 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3aaa7c67-0452-440f-8998-6ffa475eff9f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.899095 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "48d0e864-6620-4a75-baa4-8653836f3aab" (UID: "48d0e864-6620-4a75-baa4-8653836f3aab"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.899212 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "48d0e864-6620-4a75-baa4-8653836f3aab" (UID: "48d0e864-6620-4a75-baa4-8653836f3aab"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.914218 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d0e864-6620-4a75-baa4-8653836f3aab-kube-api-access-twvrb" (OuterVolumeSpecName: "kube-api-access-twvrb") pod "48d0e864-6620-4a75-baa4-8653836f3aab" (UID: "48d0e864-6620-4a75-baa4-8653836f3aab"). InnerVolumeSpecName "kube-api-access-twvrb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.918105 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d0e864-6620-4a75-baa4-8653836f3aab-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "48d0e864-6620-4a75-baa4-8653836f3aab" (UID: "48d0e864-6620-4a75-baa4-8653836f3aab"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:39:04 crc kubenswrapper[5125]: I1208 19:39:04.999846 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3aaa7c67-0452-440f-8998-6ffa475eff9f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:04.999915 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3aaa7c67-0452-440f-8998-6ffa475eff9f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:04.999939 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3aaa7c67-0452-440f-8998-6ffa475eff9f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.000052 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-td8gg\" (UniqueName: \"kubernetes.io/projected/3aaa7c67-0452-440f-8998-6ffa475eff9f-kube-api-access-td8gg\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.000112 5125 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/48d0e864-6620-4a75-baa4-8653836f3aab-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.000131 5125 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.000145 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvrb\" (UniqueName: \"kubernetes.io/projected/48d0e864-6620-4a75-baa4-8653836f3aab-kube-api-access-twvrb\") on node \"crc\" DevicePath \"\""
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.000158 5125 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/48d0e864-6620-4a75-baa4-8653836f3aab-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.000594 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3aaa7c67-0452-440f-8998-6ffa475eff9f-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.000679 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3aaa7c67-0452-440f-8998-6ffa475eff9f-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.004890 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3aaa7c67-0452-440f-8998-6ffa475eff9f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.016177 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8gg\" (UniqueName: \"kubernetes.io/projected/3aaa7c67-0452-440f-8998-6ffa475eff9f-kube-api-access-td8gg\") pod \"ovnkube-control-plane-97c9b6c48-b869m\" (UID: \"3aaa7c67-0452-440f-8998-6ffa475eff9f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.144840 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.180885 5125 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.513501 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k9whn_aabf1825-0c19-45de-9f9e-fe94777752e6/ovn-acl-logging/0.log"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.514446 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k9whn_aabf1825-0c19-45de-9f9e-fe94777752e6/ovn-controller/0.log"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.515011 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565130 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5xns9"]
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565763 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="sbdb"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565783 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="sbdb"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565795 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="northd"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565803 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="northd"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565817 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="kube-rbac-proxy-node"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565826 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="kube-rbac-proxy-node"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565835 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="kube-rbac-proxy-ovn-metrics"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565843 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="kube-rbac-proxy-ovn-metrics"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565862 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovn-acl-logging"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565870 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovn-acl-logging"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565884 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="nbdb"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565891 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="nbdb"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565902 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovn-controller"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565912 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovn-controller"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565922 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovnkube-controller"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565930 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovnkube-controller"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565957 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="kubecfg-setup"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.565964 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="kubecfg-setup"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.566061 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="nbdb"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.566076 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="sbdb"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.566085 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovn-acl-logging"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.566096 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="kube-rbac-proxy-ovn-metrics"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.566108 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="kube-rbac-proxy-node"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.566117 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovnkube-controller"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.566125 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="northd"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.566139 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerName="ovn-controller"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.571414 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608043 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-netns\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") "
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608096 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-log-socket\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") "
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608125 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-openvswitch\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") "
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608152 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-slash\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") "
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608195 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608217 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-var-lib-openvswitch\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") "
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608237 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-log-socket" (OuterVolumeSpecName: "log-socket") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608272 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-config\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") "
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608301 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-node-log\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") "
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608346 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-env-overrides\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") "
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608363 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-netd\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") "
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608386 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42xvf\" (UniqueName: \"kubernetes.io/projected/aabf1825-0c19-45de-9f9e-fe94777752e6-kube-api-access-42xvf\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") "
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608300 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-slash" (OuterVolumeSpecName: "host-slash") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608323 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608369 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-node-log" (OuterVolumeSpecName: "node-log") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "node-log".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608424 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-script-lib\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608478 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-systemd-units\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608539 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608600 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-etc-openvswitch\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608655 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-kubelet\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608699 5125 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-bin\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608737 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-ovn\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608775 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-systemd\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608800 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-ovn-kubernetes\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.608859 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aabf1825-0c19-45de-9f9e-fe94777752e6-ovn-node-metrics-cert\") pod \"aabf1825-0c19-45de-9f9e-fe94777752e6\" (UID: \"aabf1825-0c19-45de-9f9e-fe94777752e6\") " Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609060 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: 
"aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609087 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609123 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609164 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609193 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609209 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609226 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609243 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609322 5125 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609332 5125 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609341 5125 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609350 5125 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609357 5125 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609365 5125 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609374 5125 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-log-socket\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc 
kubenswrapper[5125]: I1208 19:39:05.609374 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609381 5125 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609394 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609411 5125 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-slash\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609422 5125 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609431 5125 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-node-log\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609441 5125 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609466 5125 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609513 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.609554 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.613044 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabf1825-0c19-45de-9f9e-fe94777752e6-kube-api-access-42xvf" (OuterVolumeSpecName: "kube-api-access-42xvf") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "kube-api-access-42xvf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.613290 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabf1825-0c19-45de-9f9e-fe94777752e6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.620794 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "aabf1825-0c19-45de-9f9e-fe94777752e6" (UID: "aabf1825-0c19-45de-9f9e-fe94777752e6"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.663429 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p7g8_b938d768-ccce-45a6-a982-3f5d6f1a7d98/kube-multus/0.log" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.663479 5125 generic.go:358] "Generic (PLEG): container finished" podID="b938d768-ccce-45a6-a982-3f5d6f1a7d98" containerID="eeb6fe61b3247454c6b9d9e1e48175ecc5e5ad0e231b045d3a5f6ac83cef9e81" exitCode=2 Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.663517 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p7g8" event={"ID":"b938d768-ccce-45a6-a982-3f5d6f1a7d98","Type":"ContainerDied","Data":"eeb6fe61b3247454c6b9d9e1e48175ecc5e5ad0e231b045d3a5f6ac83cef9e81"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.664187 5125 scope.go:117] "RemoveContainer" containerID="eeb6fe61b3247454c6b9d9e1e48175ecc5e5ad0e231b045d3a5f6ac83cef9e81" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.668340 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k9whn_aabf1825-0c19-45de-9f9e-fe94777752e6/ovn-acl-logging/0.log" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669152 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-k9whn_aabf1825-0c19-45de-9f9e-fe94777752e6/ovn-controller/0.log" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669515 5125 generic.go:358] "Generic (PLEG): container finished" podID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerID="7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41" exitCode=0 Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669538 5125 generic.go:358] "Generic (PLEG): container finished" podID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerID="b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1" exitCode=0 Dec 08 19:39:05 crc 
kubenswrapper[5125]: I1208 19:39:05.669547 5125 generic.go:358] "Generic (PLEG): container finished" podID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerID="6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa" exitCode=0 Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669557 5125 generic.go:358] "Generic (PLEG): container finished" podID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerID="851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3" exitCode=0 Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669564 5125 generic.go:358] "Generic (PLEG): container finished" podID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerID="3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea" exitCode=0 Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669572 5125 generic.go:358] "Generic (PLEG): container finished" podID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerID="36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9" exitCode=0 Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669579 5125 generic.go:358] "Generic (PLEG): container finished" podID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerID="f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe" exitCode=143 Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669586 5125 generic.go:358] "Generic (PLEG): container finished" podID="aabf1825-0c19-45de-9f9e-fe94777752e6" containerID="9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100" exitCode=143 Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669630 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerDied","Data":"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669681 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerDied","Data":"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669707 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerDied","Data":"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669726 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerDied","Data":"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669743 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerDied","Data":"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669752 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669773 5125 scope.go:117] "RemoveContainer" containerID="7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669758 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerDied","Data":"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669886 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669899 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669906 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669922 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerDied","Data":"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669935 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669942 5125 pod_container_deletor.go:114] "Failed to issue 
the request to remove container" containerID={"Type":"cri-o","ID":"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669949 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669955 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669961 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669967 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669973 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669978 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669984 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.669993 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerDied","Data":"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670002 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670009 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670015 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670021 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670026 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670032 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670038 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 
19:39:05.670045 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670051 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670060 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-k9whn" event={"ID":"aabf1825-0c19-45de-9f9e-fe94777752e6","Type":"ContainerDied","Data":"16a138870cb1cb6faefb39f54dd2ff08c6cb551426f96e4cbb951d7d47850407"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670069 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670086 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670093 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670099 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670106 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670112 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670118 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670124 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.670131 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673759 5125 generic.go:358] "Generic (PLEG): container finished" podID="48d0e864-6620-4a75-baa4-8653836f3aab" containerID="b20b0a9605f05d0adc59fb9552e2669c3781c6b2a3e5d64103d79ca5707cf336" exitCode=0 Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673777 5125 generic.go:358] "Generic (PLEG): container finished" podID="48d0e864-6620-4a75-baa4-8653836f3aab" containerID="16e1ad7ce234905f668415641ca07de1f1c979cfa934d9f44009b0809d0096a9" exitCode=0 Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673819 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673848 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" event={"ID":"48d0e864-6620-4a75-baa4-8653836f3aab","Type":"ContainerDied","Data":"b20b0a9605f05d0adc59fb9552e2669c3781c6b2a3e5d64103d79ca5707cf336"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673873 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b20b0a9605f05d0adc59fb9552e2669c3781c6b2a3e5d64103d79ca5707cf336"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673883 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16e1ad7ce234905f668415641ca07de1f1c979cfa934d9f44009b0809d0096a9"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673894 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" event={"ID":"48d0e864-6620-4a75-baa4-8653836f3aab","Type":"ContainerDied","Data":"16e1ad7ce234905f668415641ca07de1f1c979cfa934d9f44009b0809d0096a9"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673904 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b20b0a9605f05d0adc59fb9552e2669c3781c6b2a3e5d64103d79ca5707cf336"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673912 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16e1ad7ce234905f668415641ca07de1f1c979cfa934d9f44009b0809d0096a9"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673922 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx" 
event={"ID":"48d0e864-6620-4a75-baa4-8653836f3aab","Type":"ContainerDied","Data":"6c72e721e2a8d7fcc34cc083b0dbe02e8e032b636028e0a263c07f2463f10d25"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673931 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b20b0a9605f05d0adc59fb9552e2669c3781c6b2a3e5d64103d79ca5707cf336"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.673939 5125 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"16e1ad7ce234905f668415641ca07de1f1c979cfa934d9f44009b0809d0096a9"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.679441 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m" event={"ID":"3aaa7c67-0452-440f-8998-6ffa475eff9f","Type":"ContainerStarted","Data":"8b4c8165a8806b78121db21909817c7e2207f361d37f932476687ef38f9a3ec5"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.679473 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m" event={"ID":"3aaa7c67-0452-440f-8998-6ffa475eff9f","Type":"ContainerStarted","Data":"578be5224fd48c5233c2809ba482b553cd8d1d11d5cbbedf65f70fe09705d06f"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.679483 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m" event={"ID":"3aaa7c67-0452-440f-8998-6ffa475eff9f","Type":"ContainerStarted","Data":"6ca310bd1d7e3e3294e0582f3e071341bd0ae4ebf08a487e6c8c47a89c3697d1"} Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.692840 5125 scope.go:117] "RemoveContainer" containerID="b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.717418 5125 scope.go:117] "RemoveContainer" 
containerID="6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.718499 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-kubelet\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.718582 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-run-ovn\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.718630 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-systemd-units\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.718663 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ef3a1cc-5a78-48f2-929e-e7effe11f365-ovnkube-config\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.718704 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1ef3a1cc-5a78-48f2-929e-e7effe11f365-ovnkube-script-lib\") pod \"ovnkube-node-5xns9\" (UID: 
\"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.718872 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-run-ovn-kubernetes\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.718964 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.718997 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ef3a1cc-5a78-48f2-929e-e7effe11f365-ovn-node-metrics-cert\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.719059 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-cni-netd\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.719129 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-log-socket\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.719169 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-etc-openvswitch\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.719225 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh9xv\" (UniqueName: \"kubernetes.io/projected/1ef3a1cc-5a78-48f2-929e-e7effe11f365-kube-api-access-qh9xv\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.719288 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-node-log\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.719321 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-run-systemd\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.719376 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/1ef3a1cc-5a78-48f2-929e-e7effe11f365-env-overrides\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.719414 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-run-openvswitch\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.719470 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-var-lib-openvswitch\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.719845 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-cni-bin\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.720173 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-run-netns\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.720367 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-slash\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.720732 5125 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.720777 5125 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aabf1825-0c19-45de-9f9e-fe94777752e6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.720793 5125 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.720811 5125 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.720842 5125 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aabf1825-0c19-45de-9f9e-fe94777752e6-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.720855 5125 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aabf1825-0c19-45de-9f9e-fe94777752e6-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.720870 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-42xvf\" 
(UniqueName: \"kubernetes.io/projected/aabf1825-0c19-45de-9f9e-fe94777752e6-kube-api-access-42xvf\") on node \"crc\" DevicePath \"\"" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.734001 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-b869m" podStartSLOduration=1.733982844 podStartE2EDuration="1.733982844s" podCreationTimestamp="2025-12-08 19:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:39:05.699197614 +0000 UTC m=+602.469687898" watchObservedRunningTime="2025-12-08 19:39:05.733982844 +0000 UTC m=+602.504473118" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.751554 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k9whn"] Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.754700 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-k9whn"] Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.767016 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx"] Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.775596 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabf1825-0c19-45de-9f9e-fe94777752e6" path="/var/lib/kubelet/pods/aabf1825-0c19-45de-9f9e-fe94777752e6/volumes" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.777191 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-w8mbx"] Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.782991 5125 scope.go:117] "RemoveContainer" containerID="851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.812781 5125 scope.go:117] "RemoveContainer" 
containerID="3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821346 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-var-lib-openvswitch\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821378 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-cni-bin\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821395 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-run-netns\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821427 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-slash\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821464 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-kubelet\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 
19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821481 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-run-ovn\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821497 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-systemd-units\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821512 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ef3a1cc-5a78-48f2-929e-e7effe11f365-ovnkube-config\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821528 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1ef3a1cc-5a78-48f2-929e-e7effe11f365-ovnkube-script-lib\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821544 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-run-ovn-kubernetes\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821578 5125 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821593 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ef3a1cc-5a78-48f2-929e-e7effe11f365-ovn-node-metrics-cert\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821631 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-cni-netd\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821675 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-log-socket\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821696 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-etc-openvswitch\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821719 5125 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qh9xv\" (UniqueName: \"kubernetes.io/projected/1ef3a1cc-5a78-48f2-929e-e7effe11f365-kube-api-access-qh9xv\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821745 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-node-log\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821764 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-run-systemd\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821780 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ef3a1cc-5a78-48f2-929e-e7effe11f365-env-overrides\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821800 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-run-openvswitch\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821858 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-run-openvswitch\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821890 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-var-lib-openvswitch\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821910 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-cni-bin\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.821929 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-run-netns\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.822177 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-slash\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.822783 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-cni-netd\") pod 
\"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.822809 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-run-ovn\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.822833 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-node-log\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.822853 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-kubelet\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.822863 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-run-systemd\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.822899 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-systemd-units\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc 
kubenswrapper[5125]: I1208 19:39:05.823226 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-run-ovn-kubernetes\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.823459 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-log-socket\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.823453 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-etc-openvswitch\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.823495 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ef3a1cc-5a78-48f2-929e-e7effe11f365-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.823679 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ef3a1cc-5a78-48f2-929e-e7effe11f365-ovnkube-config\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.823972 5125 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1ef3a1cc-5a78-48f2-929e-e7effe11f365-ovnkube-script-lib\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.824890 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ef3a1cc-5a78-48f2-929e-e7effe11f365-env-overrides\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.830832 5125 scope.go:117] "RemoveContainer" containerID="36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.834432 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ef3a1cc-5a78-48f2-929e-e7effe11f365-ovn-node-metrics-cert\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.838774 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh9xv\" (UniqueName: \"kubernetes.io/projected/1ef3a1cc-5a78-48f2-929e-e7effe11f365-kube-api-access-qh9xv\") pod \"ovnkube-node-5xns9\" (UID: \"1ef3a1cc-5a78-48f2-929e-e7effe11f365\") " pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.866420 5125 scope.go:117] "RemoveContainer" containerID="f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.878218 5125 scope.go:117] "RemoveContainer" 
containerID="9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.887937 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.902773 5125 scope.go:117] "RemoveContainer" containerID="79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.926315 5125 scope.go:117] "RemoveContainer" containerID="7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41" Dec 08 19:39:05 crc kubenswrapper[5125]: E1208 19:39:05.930104 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41\": container with ID starting with 7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41 not found: ID does not exist" containerID="7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.930146 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41"} err="failed to get container status \"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41\": rpc error: code = NotFound desc = could not find container \"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41\": container with ID starting with 7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.930171 5125 scope.go:117] "RemoveContainer" containerID="b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1" Dec 08 19:39:05 crc kubenswrapper[5125]: E1208 19:39:05.930630 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1\": container with ID starting with b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1 not found: ID does not exist" containerID="b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.930649 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1"} err="failed to get container status \"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1\": rpc error: code = NotFound desc = could not find container \"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1\": container with ID starting with b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.930665 5125 scope.go:117] "RemoveContainer" containerID="6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa" Dec 08 19:39:05 crc kubenswrapper[5125]: E1208 19:39:05.930989 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa\": container with ID starting with 6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa not found: ID does not exist" containerID="6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.931033 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa"} err="failed to get container status \"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa\": rpc error: code = NotFound desc = could not find container 
\"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa\": container with ID starting with 6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.931056 5125 scope.go:117] "RemoveContainer" containerID="851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3" Dec 08 19:39:05 crc kubenswrapper[5125]: E1208 19:39:05.931534 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3\": container with ID starting with 851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3 not found: ID does not exist" containerID="851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.931676 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3"} err="failed to get container status \"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3\": rpc error: code = NotFound desc = could not find container \"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3\": container with ID starting with 851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.931765 5125 scope.go:117] "RemoveContainer" containerID="3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea" Dec 08 19:39:05 crc kubenswrapper[5125]: E1208 19:39:05.932067 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea\": container with ID starting with 3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea not found: ID does not exist" 
containerID="3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.932143 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea"} err="failed to get container status \"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea\": rpc error: code = NotFound desc = could not find container \"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea\": container with ID starting with 3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.932200 5125 scope.go:117] "RemoveContainer" containerID="36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9" Dec 08 19:39:05 crc kubenswrapper[5125]: E1208 19:39:05.932478 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9\": container with ID starting with 36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9 not found: ID does not exist" containerID="36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.932573 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9"} err="failed to get container status \"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9\": rpc error: code = NotFound desc = could not find container \"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9\": container with ID starting with 36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.932671 5125 scope.go:117] 
"RemoveContainer" containerID="f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe" Dec 08 19:39:05 crc kubenswrapper[5125]: E1208 19:39:05.934217 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe\": container with ID starting with f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe not found: ID does not exist" containerID="f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.934246 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe"} err="failed to get container status \"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe\": rpc error: code = NotFound desc = could not find container \"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe\": container with ID starting with f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.934260 5125 scope.go:117] "RemoveContainer" containerID="9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100" Dec 08 19:39:05 crc kubenswrapper[5125]: E1208 19:39:05.934482 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100\": container with ID starting with 9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100 not found: ID does not exist" containerID="9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.934508 5125 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100"} err="failed to get container status \"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100\": rpc error: code = NotFound desc = could not find container \"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100\": container with ID starting with 9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.934523 5125 scope.go:117] "RemoveContainer" containerID="79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8" Dec 08 19:39:05 crc kubenswrapper[5125]: E1208 19:39:05.934731 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\": container with ID starting with 79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8 not found: ID does not exist" containerID="79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.934760 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8"} err="failed to get container status \"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\": rpc error: code = NotFound desc = could not find container \"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\": container with ID starting with 79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.934778 5125 scope.go:117] "RemoveContainer" containerID="7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.935025 5125 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41"} err="failed to get container status \"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41\": rpc error: code = NotFound desc = could not find container \"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41\": container with ID starting with 7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.935047 5125 scope.go:117] "RemoveContainer" containerID="b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.935252 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1"} err="failed to get container status \"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1\": rpc error: code = NotFound desc = could not find container \"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1\": container with ID starting with b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.935270 5125 scope.go:117] "RemoveContainer" containerID="6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.935566 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa"} err="failed to get container status \"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa\": rpc error: code = NotFound desc = could not find container \"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa\": container with ID starting with 6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa not 
found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.935588 5125 scope.go:117] "RemoveContainer" containerID="851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.935781 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3"} err="failed to get container status \"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3\": rpc error: code = NotFound desc = could not find container \"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3\": container with ID starting with 851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.935799 5125 scope.go:117] "RemoveContainer" containerID="3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.935986 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea"} err="failed to get container status \"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea\": rpc error: code = NotFound desc = could not find container \"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea\": container with ID starting with 3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936003 5125 scope.go:117] "RemoveContainer" containerID="36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936130 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9"} err="failed to get 
container status \"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9\": rpc error: code = NotFound desc = could not find container \"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9\": container with ID starting with 36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936146 5125 scope.go:117] "RemoveContainer" containerID="f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936262 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe"} err="failed to get container status \"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe\": rpc error: code = NotFound desc = could not find container \"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe\": container with ID starting with f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936279 5125 scope.go:117] "RemoveContainer" containerID="9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936434 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100"} err="failed to get container status \"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100\": rpc error: code = NotFound desc = could not find container \"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100\": container with ID starting with 9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936452 5125 scope.go:117] "RemoveContainer" 
containerID="79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936633 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8"} err="failed to get container status \"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\": rpc error: code = NotFound desc = could not find container \"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\": container with ID starting with 79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936653 5125 scope.go:117] "RemoveContainer" containerID="7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936805 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41"} err="failed to get container status \"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41\": rpc error: code = NotFound desc = could not find container \"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41\": container with ID starting with 7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936824 5125 scope.go:117] "RemoveContainer" containerID="b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936968 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1"} err="failed to get container status \"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1\": rpc error: code = NotFound desc = could 
not find container \"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1\": container with ID starting with b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.936994 5125 scope.go:117] "RemoveContainer" containerID="6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.937113 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa"} err="failed to get container status \"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa\": rpc error: code = NotFound desc = could not find container \"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa\": container with ID starting with 6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.937127 5125 scope.go:117] "RemoveContainer" containerID="851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.937279 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3"} err="failed to get container status \"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3\": rpc error: code = NotFound desc = could not find container \"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3\": container with ID starting with 851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.937296 5125 scope.go:117] "RemoveContainer" containerID="3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 
19:39:05.937508 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea"} err="failed to get container status \"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea\": rpc error: code = NotFound desc = could not find container \"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea\": container with ID starting with 3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.937525 5125 scope.go:117] "RemoveContainer" containerID="36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.937727 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9"} err="failed to get container status \"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9\": rpc error: code = NotFound desc = could not find container \"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9\": container with ID starting with 36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.937740 5125 scope.go:117] "RemoveContainer" containerID="f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.937910 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe"} err="failed to get container status \"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe\": rpc error: code = NotFound desc = could not find container \"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe\": container with ID starting with 
f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.937928 5125 scope.go:117] "RemoveContainer" containerID="9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.938102 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100"} err="failed to get container status \"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100\": rpc error: code = NotFound desc = could not find container \"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100\": container with ID starting with 9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.938122 5125 scope.go:117] "RemoveContainer" containerID="79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.938285 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8"} err="failed to get container status \"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\": rpc error: code = NotFound desc = could not find container \"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\": container with ID starting with 79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.938304 5125 scope.go:117] "RemoveContainer" containerID="7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.940048 5125 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41"} err="failed to get container status \"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41\": rpc error: code = NotFound desc = could not find container \"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41\": container with ID starting with 7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.940093 5125 scope.go:117] "RemoveContainer" containerID="b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.940346 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1"} err="failed to get container status \"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1\": rpc error: code = NotFound desc = could not find container \"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1\": container with ID starting with b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.940364 5125 scope.go:117] "RemoveContainer" containerID="6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.940757 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa"} err="failed to get container status \"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa\": rpc error: code = NotFound desc = could not find container \"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa\": container with ID starting with 6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa not found: ID does not 
exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.940781 5125 scope.go:117] "RemoveContainer" containerID="851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.946903 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3"} err="failed to get container status \"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3\": rpc error: code = NotFound desc = could not find container \"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3\": container with ID starting with 851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.946932 5125 scope.go:117] "RemoveContainer" containerID="3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.947141 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea"} err="failed to get container status \"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea\": rpc error: code = NotFound desc = could not find container \"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea\": container with ID starting with 3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.947164 5125 scope.go:117] "RemoveContainer" containerID="36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.947379 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9"} err="failed to get container status 
\"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9\": rpc error: code = NotFound desc = could not find container \"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9\": container with ID starting with 36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.947407 5125 scope.go:117] "RemoveContainer" containerID="f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.947600 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe"} err="failed to get container status \"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe\": rpc error: code = NotFound desc = could not find container \"f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe\": container with ID starting with f2f2e6b44b7da40680601e09cfc2ac282135d38bd2cc2a03bdbacfafbc77cebe not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.947641 5125 scope.go:117] "RemoveContainer" containerID="9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.947870 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100"} err="failed to get container status \"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100\": rpc error: code = NotFound desc = could not find container \"9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100\": container with ID starting with 9792ded106488269b52844056dd1b2e9d47a61d8fc8ac11b8e875d095bdcf100 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.947893 5125 scope.go:117] "RemoveContainer" 
containerID="79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.949815 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8"} err="failed to get container status \"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\": rpc error: code = NotFound desc = could not find container \"79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8\": container with ID starting with 79f926815b3c7b9ed801ce200da2b1dc7b3cd3c8255d2c08269a8cfa0404c6e8 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.949840 5125 scope.go:117] "RemoveContainer" containerID="7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.950015 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41"} err="failed to get container status \"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41\": rpc error: code = NotFound desc = could not find container \"7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41\": container with ID starting with 7b0b6f0d68dc45d03f38fa5c3b37106038afea63d947e2e13b33800207613c41 not found: ID does not exist" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.950038 5125 scope.go:117] "RemoveContainer" containerID="b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1" Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.950181 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1"} err="failed to get container status \"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1\": rpc error: code = NotFound desc = could 
not find container \"b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1\": container with ID starting with b174cb1e9f8a4470b0ccf00c194cd8703068d2927af78eac74163c51ba4a60f1 not found: ID does not exist"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.950207 5125 scope.go:117] "RemoveContainer" containerID="6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.950341 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa"} err="failed to get container status \"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa\": rpc error: code = NotFound desc = could not find container \"6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa\": container with ID starting with 6a40b6881b03838f0d5d86720835287d7877c1383f321a9098bb07cd91b4cafa not found: ID does not exist"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.950362 5125 scope.go:117] "RemoveContainer" containerID="851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.950526 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3"} err="failed to get container status \"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3\": rpc error: code = NotFound desc = could not find container \"851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3\": container with ID starting with 851420b7644d0d49fba8f7cda2903caae42e51122b9eef2152e9f9ca4437b8c3 not found: ID does not exist"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.950548 5125 scope.go:117] "RemoveContainer" containerID="3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.952820 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea"} err="failed to get container status \"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea\": rpc error: code = NotFound desc = could not find container \"3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea\": container with ID starting with 3a87fb12609166d53c2598375bd1507b67a3b8f2df95c7c5fdf7bad4a4ce34ea not found: ID does not exist"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.952841 5125 scope.go:117] "RemoveContainer" containerID="36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9"
Dec 08 19:39:05 crc kubenswrapper[5125]: I1208 19:39:05.953029 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9"} err="failed to get container status \"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9\": rpc error: code = NotFound desc = could not find container \"36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9\": container with ID starting with 36ac66da02e97cb0adcc8889b80f48b74393c5a99b1e3bb583a3065310f89da9 not found: ID does not exist"
Dec 08 19:39:06 crc kubenswrapper[5125]: I1208 19:39:06.688864 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p7g8_b938d768-ccce-45a6-a982-3f5d6f1a7d98/kube-multus/0.log"
Dec 08 19:39:06 crc kubenswrapper[5125]: I1208 19:39:06.688990 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p7g8" event={"ID":"b938d768-ccce-45a6-a982-3f5d6f1a7d98","Type":"ContainerStarted","Data":"4b253132aba9e35d728a6a7fe77dca96ba0b2a6f0765011e63e68199cdfff6ac"}
Dec 08 19:39:06 crc kubenswrapper[5125]: I1208 19:39:06.692287 5125 generic.go:358] "Generic (PLEG): container finished" podID="1ef3a1cc-5a78-48f2-929e-e7effe11f365" containerID="d1819526d17cc46232d53fe53af3c3ff48aa8061b150a29737de0cca41a0acb1" exitCode=0
Dec 08 19:39:06 crc kubenswrapper[5125]: I1208 19:39:06.692420 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" event={"ID":"1ef3a1cc-5a78-48f2-929e-e7effe11f365","Type":"ContainerDied","Data":"d1819526d17cc46232d53fe53af3c3ff48aa8061b150a29737de0cca41a0acb1"}
Dec 08 19:39:06 crc kubenswrapper[5125]: I1208 19:39:06.692476 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" event={"ID":"1ef3a1cc-5a78-48f2-929e-e7effe11f365","Type":"ContainerStarted","Data":"1d18a94f85897813e3d30e1ee8fce1dc32e5ca966121fd14046701ce07a7dad3"}
Dec 08 19:39:07 crc kubenswrapper[5125]: I1208 19:39:07.703131 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" event={"ID":"1ef3a1cc-5a78-48f2-929e-e7effe11f365","Type":"ContainerStarted","Data":"bca8e4583adc6a7f34247778f9dddd817fba58e15e29b4f251d3c5ecb912cec1"}
Dec 08 19:39:07 crc kubenswrapper[5125]: I1208 19:39:07.703197 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" event={"ID":"1ef3a1cc-5a78-48f2-929e-e7effe11f365","Type":"ContainerStarted","Data":"cf485f4a08498d918aa3265c10039dacf1e54e2c92787ac3a63cbd1d33c61921"}
Dec 08 19:39:07 crc kubenswrapper[5125]: I1208 19:39:07.703218 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" event={"ID":"1ef3a1cc-5a78-48f2-929e-e7effe11f365","Type":"ContainerStarted","Data":"1fdc0450b3dd330ecce63b465e812827aa2d8bf114798b9c03dde4a4eb1178ac"}
Dec 08 19:39:07 crc kubenswrapper[5125]: I1208 19:39:07.703237 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" event={"ID":"1ef3a1cc-5a78-48f2-929e-e7effe11f365","Type":"ContainerStarted","Data":"559b9d139581aa00ffcc545adbbb273a83374596c63eb741a2b72bf8e8c3de5f"}
Dec 08 19:39:07 crc kubenswrapper[5125]: I1208 19:39:07.703254 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" event={"ID":"1ef3a1cc-5a78-48f2-929e-e7effe11f365","Type":"ContainerStarted","Data":"7c3e7d29d325f719f7c2b1268337a612f4c8223ced8c886cce14727a0b32a22b"}
Dec 08 19:39:07 crc kubenswrapper[5125]: I1208 19:39:07.703272 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" event={"ID":"1ef3a1cc-5a78-48f2-929e-e7effe11f365","Type":"ContainerStarted","Data":"234f09b0d01f468449e4637d7453aa7a707bacf2732d2c0a310f8cbfe94348fa"}
Dec 08 19:39:07 crc kubenswrapper[5125]: I1208 19:39:07.779392 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48d0e864-6620-4a75-baa4-8653836f3aab" path="/var/lib/kubelet/pods/48d0e864-6620-4a75-baa4-8653836f3aab/volumes"
Dec 08 19:39:09 crc kubenswrapper[5125]: I1208 19:39:09.719111 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" event={"ID":"1ef3a1cc-5a78-48f2-929e-e7effe11f365","Type":"ContainerStarted","Data":"3f23e818dbbe5dd3ae13c662870bd69ec557b64e51a12f926eed1a8a37c3d128"}
Dec 08 19:39:12 crc kubenswrapper[5125]: I1208 19:39:12.743966 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" event={"ID":"1ef3a1cc-5a78-48f2-929e-e7effe11f365","Type":"ContainerStarted","Data":"e84837d8c30d0b304a4f1b061222b08976ee461c9001795429a346ca9002ad6f"}
Dec 08 19:39:12 crc kubenswrapper[5125]: I1208 19:39:12.746025 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9"
Dec 08 19:39:12 crc kubenswrapper[5125]: I1208 19:39:12.746060 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9"
Dec 08 19:39:12 crc kubenswrapper[5125]: I1208 19:39:12.746112 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9"
Dec 08 19:39:12 crc kubenswrapper[5125]: I1208 19:39:12.779224 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9"
Dec 08 19:39:12 crc kubenswrapper[5125]: I1208 19:39:12.781650 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9"
Dec 08 19:39:12 crc kubenswrapper[5125]: I1208 19:39:12.783245 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9" podStartSLOduration=7.783237316 podStartE2EDuration="7.783237316s" podCreationTimestamp="2025-12-08 19:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:39:12.778809934 +0000 UTC m=+609.549300208" watchObservedRunningTime="2025-12-08 19:39:12.783237316 +0000 UTC m=+609.553727580"
Dec 08 19:39:12 crc kubenswrapper[5125]: I1208 19:39:12.912437 5125 ???:1] "http: TLS handshake error from 192.168.126.11:59968: no serving certificate available for the kubelet"
Dec 08 19:39:21 crc kubenswrapper[5125]: I1208 19:39:21.101177 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:39:21 crc kubenswrapper[5125]: I1208 19:39:21.101915 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:39:44 crc kubenswrapper[5125]: I1208 19:39:44.778617 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5xns9"
Dec 08 19:39:51 crc kubenswrapper[5125]: I1208 19:39:51.101310 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:39:51 crc kubenswrapper[5125]: I1208 19:39:51.101657 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:39:51 crc kubenswrapper[5125]: I1208 19:39:51.101708 5125 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slhjr"
Dec 08 19:39:51 crc kubenswrapper[5125]: I1208 19:39:51.102301 5125 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3eaff9ff574646a35fa068c19d68106caffff9d6e28141d09b7049a7e34edb72"} pod="openshift-machine-config-operator/machine-config-daemon-slhjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 19:39:51 crc kubenswrapper[5125]: I1208 19:39:51.102637 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" containerID="cri-o://3eaff9ff574646a35fa068c19d68106caffff9d6e28141d09b7049a7e34edb72" gracePeriod=600
Dec 08 19:39:51 crc kubenswrapper[5125]: I1208 19:39:51.989133 5125 generic.go:358] "Generic (PLEG): container finished" podID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerID="3eaff9ff574646a35fa068c19d68106caffff9d6e28141d09b7049a7e34edb72" exitCode=0
Dec 08 19:39:51 crc kubenswrapper[5125]: I1208 19:39:51.989195 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerDied","Data":"3eaff9ff574646a35fa068c19d68106caffff9d6e28141d09b7049a7e34edb72"}
Dec 08 19:39:51 crc kubenswrapper[5125]: I1208 19:39:51.989709 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerStarted","Data":"f9eb1c7e5f36182d845fb8ea13653363a63738eedc2b7b6ae1600d40f21292c7"}
Dec 08 19:39:51 crc kubenswrapper[5125]: I1208 19:39:51.989750 5125 scope.go:117] "RemoveContainer" containerID="47c3c7b274e1f8fb2e42d6843b6c70142b9720f62299f0a9859e9a777dd9f1a9"
Dec 08 19:40:04 crc kubenswrapper[5125]: I1208 19:40:04.160074 5125 scope.go:117] "RemoveContainer" containerID="b20b0a9605f05d0adc59fb9552e2669c3781c6b2a3e5d64103d79ca5707cf336"
Dec 08 19:40:04 crc kubenswrapper[5125]: I1208 19:40:04.180377 5125 scope.go:117] "RemoveContainer" containerID="16e1ad7ce234905f668415641ca07de1f1c979cfa934d9f44009b0809d0096a9"
Dec 08 19:40:15 crc kubenswrapper[5125]: I1208 19:40:15.858970 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9m8"]
Dec 08 19:40:15 crc kubenswrapper[5125]: I1208 19:40:15.859967 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zc9m8" podUID="45f623b6-715e-49bc-a570-1bd15effb4f5" containerName="registry-server" containerID="cri-o://82365a532581dbff147b4fecbde17df6ef597ce16c4d2af233e94ba7124566d5" gracePeriod=30
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.171178 5125 generic.go:358] "Generic (PLEG): container finished" podID="45f623b6-715e-49bc-a570-1bd15effb4f5" containerID="82365a532581dbff147b4fecbde17df6ef597ce16c4d2af233e94ba7124566d5" exitCode=0
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.171252 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc9m8" event={"ID":"45f623b6-715e-49bc-a570-1bd15effb4f5","Type":"ContainerDied","Data":"82365a532581dbff147b4fecbde17df6ef597ce16c4d2af233e94ba7124566d5"}
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.171585 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zc9m8" event={"ID":"45f623b6-715e-49bc-a570-1bd15effb4f5","Type":"ContainerDied","Data":"2da2208964f0a8f01d3f06d67a34b56e6e3d669055e7344cd030e8f75a17c018"}
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.171598 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2da2208964f0a8f01d3f06d67a34b56e6e3d669055e7344cd030e8f75a17c018"
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.186694 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc9m8"
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.271350 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-utilities\") pod \"45f623b6-715e-49bc-a570-1bd15effb4f5\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") "
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.271422 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjb7p\" (UniqueName: \"kubernetes.io/projected/45f623b6-715e-49bc-a570-1bd15effb4f5-kube-api-access-sjb7p\") pod \"45f623b6-715e-49bc-a570-1bd15effb4f5\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") "
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.271480 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-catalog-content\") pod \"45f623b6-715e-49bc-a570-1bd15effb4f5\" (UID: \"45f623b6-715e-49bc-a570-1bd15effb4f5\") "
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.272660 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-utilities" (OuterVolumeSpecName: "utilities") pod "45f623b6-715e-49bc-a570-1bd15effb4f5" (UID: "45f623b6-715e-49bc-a570-1bd15effb4f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.276796 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f623b6-715e-49bc-a570-1bd15effb4f5-kube-api-access-sjb7p" (OuterVolumeSpecName: "kube-api-access-sjb7p") pod "45f623b6-715e-49bc-a570-1bd15effb4f5" (UID: "45f623b6-715e-49bc-a570-1bd15effb4f5"). InnerVolumeSpecName "kube-api-access-sjb7p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.284915 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45f623b6-715e-49bc-a570-1bd15effb4f5" (UID: "45f623b6-715e-49bc-a570-1bd15effb4f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.372536 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.372586 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f623b6-715e-49bc-a570-1bd15effb4f5-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.372598 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sjb7p\" (UniqueName: \"kubernetes.io/projected/45f623b6-715e-49bc-a570-1bd15effb4f5-kube-api-access-sjb7p\") on node \"crc\" DevicePath \"\""
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.948001 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"]
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.948507 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45f623b6-715e-49bc-a570-1bd15effb4f5" containerName="extract-content"
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.948520 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f623b6-715e-49bc-a570-1bd15effb4f5" containerName="extract-content"
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.948532 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45f623b6-715e-49bc-a570-1bd15effb4f5" containerName="registry-server"
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.948537 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f623b6-715e-49bc-a570-1bd15effb4f5" containerName="registry-server"
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.948547 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45f623b6-715e-49bc-a570-1bd15effb4f5" containerName="extract-utilities"
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.948555 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f623b6-715e-49bc-a570-1bd15effb4f5" containerName="extract-utilities"
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.948652 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="45f623b6-715e-49bc-a570-1bd15effb4f5" containerName="registry-server"
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.955597 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:16 crc kubenswrapper[5125]: I1208 19:40:16.958749 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"]
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.083634 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/632a3908-f482-44bd-9107-6532cfce7e72-registry-certificates\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.083684 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/632a3908-f482-44bd-9107-6532cfce7e72-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.083717 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/632a3908-f482-44bd-9107-6532cfce7e72-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.083825 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/632a3908-f482-44bd-9107-6532cfce7e72-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.083919 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/632a3908-f482-44bd-9107-6532cfce7e72-trusted-ca\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.083938 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/632a3908-f482-44bd-9107-6532cfce7e72-registry-tls\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.084037 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.084093 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v68cc\" (UniqueName: \"kubernetes.io/projected/632a3908-f482-44bd-9107-6532cfce7e72-kube-api-access-v68cc\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.107748 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.177663 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zc9m8"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.184848 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v68cc\" (UniqueName: \"kubernetes.io/projected/632a3908-f482-44bd-9107-6532cfce7e72-kube-api-access-v68cc\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.184890 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/632a3908-f482-44bd-9107-6532cfce7e72-registry-certificates\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.184915 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/632a3908-f482-44bd-9107-6532cfce7e72-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.184932 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/632a3908-f482-44bd-9107-6532cfce7e72-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.184960 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/632a3908-f482-44bd-9107-6532cfce7e72-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.184992 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/632a3908-f482-44bd-9107-6532cfce7e72-trusted-ca\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.185108 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/632a3908-f482-44bd-9107-6532cfce7e72-registry-tls\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.185417 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/632a3908-f482-44bd-9107-6532cfce7e72-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.186274 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/632a3908-f482-44bd-9107-6532cfce7e72-trusted-ca\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.186805 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/632a3908-f482-44bd-9107-6532cfce7e72-registry-certificates\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.189924 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/632a3908-f482-44bd-9107-6532cfce7e72-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.190137 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/632a3908-f482-44bd-9107-6532cfce7e72-registry-tls\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.201192 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/632a3908-f482-44bd-9107-6532cfce7e72-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.204242 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v68cc\" (UniqueName: \"kubernetes.io/projected/632a3908-f482-44bd-9107-6532cfce7e72-kube-api-access-v68cc\") pod \"image-registry-5d9d95bf5b-rvbzh\" (UID: \"632a3908-f482-44bd-9107-6532cfce7e72\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.236370 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9m8"]
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.240525 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zc9m8"]
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.273893 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.476356 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"]
Dec 08 19:40:17 crc kubenswrapper[5125]: I1208 19:40:17.783315 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f623b6-715e-49bc-a570-1bd15effb4f5" path="/var/lib/kubelet/pods/45f623b6-715e-49bc-a570-1bd15effb4f5/volumes"
Dec 08 19:40:18 crc kubenswrapper[5125]: I1208 19:40:18.185017 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh" event={"ID":"632a3908-f482-44bd-9107-6532cfce7e72","Type":"ContainerStarted","Data":"5c019c0ed05cb36e4f85d3a392f09bb24236675b6f6277ec5d1ed51f6fee88ae"}
Dec 08 19:40:18 crc kubenswrapper[5125]: I1208 19:40:18.185080 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh" event={"ID":"632a3908-f482-44bd-9107-6532cfce7e72","Type":"ContainerStarted","Data":"fbf82db350b6883f8723387ec2ece11fcce6bc60a3bfdd3fe32762e260118826"}
Dec 08 19:40:18 crc kubenswrapper[5125]: I1208 19:40:18.213424 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh" podStartSLOduration=2.213401584 podStartE2EDuration="2.213401584s" podCreationTimestamp="2025-12-08 19:40:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:40:18.209543038 +0000 UTC m=+674.980033392" watchObservedRunningTime="2025-12-08 19:40:18.213401584 +0000 UTC m=+674.983891868"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.191684 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.578513 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"]
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.592391 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"]
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.592562 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.610665 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.715728 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4pfd\" (UniqueName: \"kubernetes.io/projected/60254d37-51bc-4726-b3f4-23dfd84d4b8f-kube-api-access-f4pfd\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.715850 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.716062 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.817919 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4pfd\" (UniqueName: \"kubernetes.io/projected/60254d37-51bc-4726-b3f4-23dfd84d4b8f-kube-api-access-f4pfd\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.818016 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.818044 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.818861 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.819069 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.851358 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4pfd\" (UniqueName: \"kubernetes.io/projected/60254d37-51bc-4726-b3f4-23dfd84d4b8f-kube-api-access-f4pfd\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"
Dec 08 19:40:19 crc kubenswrapper[5125]: I1208 19:40:19.927968 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"
Dec 08 19:40:20 crc kubenswrapper[5125]: I1208 19:40:20.167945 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx"]
Dec 08 19:40:20 crc kubenswrapper[5125]: W1208 19:40:20.176857 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60254d37_51bc_4726_b3f4_23dfd84d4b8f.slice/crio-c6e8ba05e2e24f4c7d3d8c29bbe56b032d88e47d0a58f094244651edb83cf314 WatchSource:0}: Error finding container c6e8ba05e2e24f4c7d3d8c29bbe56b032d88e47d0a58f094244651edb83cf314: Status 404 returned error can't find the container with id c6e8ba05e2e24f4c7d3d8c29bbe56b032d88e47d0a58f094244651edb83cf314
Dec 08 19:40:20 crc kubenswrapper[5125]: I1208 19:40:20.196741 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx" event={"ID":"60254d37-51bc-4726-b3f4-23dfd84d4b8f","Type":"ContainerStarted","Data":"c6e8ba05e2e24f4c7d3d8c29bbe56b032d88e47d0a58f094244651edb83cf314"}
Dec 08 19:40:21 crc kubenswrapper[5125]: I1208 19:40:21.203930 5125 generic.go:358] "Generic (PLEG): container finished" podID="60254d37-51bc-4726-b3f4-23dfd84d4b8f" containerID="f73e5be4e1ead19a2b45bbbf6a7a4ac609d818a067b4111043625bdb1d24a7ec" exitCode=0
Dec 08 19:40:21 crc kubenswrapper[5125]: I1208 19:40:21.204100 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx" event={"ID":"60254d37-51bc-4726-b3f4-23dfd84d4b8f","Type":"ContainerDied","Data":"f73e5be4e1ead19a2b45bbbf6a7a4ac609d818a067b4111043625bdb1d24a7ec"}
Dec 08 19:40:23 crc kubenswrapper[5125]: I1208 19:40:23.220933 5125 generic.go:358] "Generic (PLEG): container finished" podID="60254d37-51bc-4726-b3f4-23dfd84d4b8f" containerID="b5a7bd6bd844e02af03da3612c1fd09e6429c251d5a8ace68983366c0e104d64" exitCode=0
Dec 08 19:40:23 crc kubenswrapper[5125]: I1208 19:40:23.221043 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx" event={"ID":"60254d37-51bc-4726-b3f4-23dfd84d4b8f","Type":"ContainerDied","Data":"b5a7bd6bd844e02af03da3612c1fd09e6429c251d5a8ace68983366c0e104d64"}
Dec 08 19:40:24 crc kubenswrapper[5125]: I1208 19:40:24.229740 5125 generic.go:358] "Generic (PLEG): container finished" podID="60254d37-51bc-4726-b3f4-23dfd84d4b8f" containerID="a71cb85ec01a34f56f0b558f36c4bfc44bcdeeb3c87fb90bc1e2cea3795fec51" exitCode=0
Dec 08 19:40:24 crc kubenswrapper[5125]: I1208 19:40:24.229813 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx" event={"ID":"60254d37-51bc-4726-b3f4-23dfd84d4b8f","Type":"ContainerDied","Data":"a71cb85ec01a34f56f0b558f36c4bfc44bcdeeb3c87fb90bc1e2cea3795fec51"}
Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.562953 5125 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.598126 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-util\") pod \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.598185 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-bundle\") pod \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.598267 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4pfd\" (UniqueName: \"kubernetes.io/projected/60254d37-51bc-4726-b3f4-23dfd84d4b8f-kube-api-access-f4pfd\") pod \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\" (UID: \"60254d37-51bc-4726-b3f4-23dfd84d4b8f\") " Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.623058 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-bundle" (OuterVolumeSpecName: "bundle") pod "60254d37-51bc-4726-b3f4-23dfd84d4b8f" (UID: "60254d37-51bc-4726-b3f4-23dfd84d4b8f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.633824 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60254d37-51bc-4726-b3f4-23dfd84d4b8f-kube-api-access-f4pfd" (OuterVolumeSpecName: "kube-api-access-f4pfd") pod "60254d37-51bc-4726-b3f4-23dfd84d4b8f" (UID: "60254d37-51bc-4726-b3f4-23dfd84d4b8f"). InnerVolumeSpecName "kube-api-access-f4pfd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.646308 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-util" (OuterVolumeSpecName: "util") pod "60254d37-51bc-4726-b3f4-23dfd84d4b8f" (UID: "60254d37-51bc-4726-b3f4-23dfd84d4b8f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.700121 5125 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.700165 5125 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/60254d37-51bc-4726-b3f4-23dfd84d4b8f-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.700180 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f4pfd\" (UniqueName: \"kubernetes.io/projected/60254d37-51bc-4726-b3f4-23dfd84d4b8f-kube-api-access-f4pfd\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.976428 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd"] Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.977854 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="60254d37-51bc-4726-b3f4-23dfd84d4b8f" containerName="extract" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.977902 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="60254d37-51bc-4726-b3f4-23dfd84d4b8f" containerName="extract" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.977997 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="60254d37-51bc-4726-b3f4-23dfd84d4b8f" containerName="pull" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.978017 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="60254d37-51bc-4726-b3f4-23dfd84d4b8f" containerName="pull" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.978047 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="60254d37-51bc-4726-b3f4-23dfd84d4b8f" containerName="util" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.978063 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="60254d37-51bc-4726-b3f4-23dfd84d4b8f" containerName="util" Dec 08 19:40:25 crc kubenswrapper[5125]: I1208 19:40:25.978306 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="60254d37-51bc-4726-b3f4-23dfd84d4b8f" containerName="extract" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.125941 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd"] Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.126149 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.207716 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.207796 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.207985 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn888\" (UniqueName: \"kubernetes.io/projected/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-kube-api-access-bn888\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.248040 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx" event={"ID":"60254d37-51bc-4726-b3f4-23dfd84d4b8f","Type":"ContainerDied","Data":"c6e8ba05e2e24f4c7d3d8c29bbe56b032d88e47d0a58f094244651edb83cf314"} Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.248110 5125 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6e8ba05e2e24f4c7d3d8c29bbe56b032d88e47d0a58f094244651edb83cf314" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.248130 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104mfnx" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.309531 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.309657 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.309857 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bn888\" (UniqueName: \"kubernetes.io/projected/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-kube-api-access-bn888\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.310362 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.310474 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.347203 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn888\" (UniqueName: \"kubernetes.io/projected/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-kube-api-access-bn888\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.452581 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:26 crc kubenswrapper[5125]: I1208 19:40:26.679477 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd"] Dec 08 19:40:26 crc kubenswrapper[5125]: W1208 19:40:26.684785 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a97abc6_7fd0_4f78_b973_23a62d4c8f20.slice/crio-556ae2b43076e85a5ab947d43ce5dbe56ff8eeeee17ea9ce3d88092e2b33a5e3 WatchSource:0}: Error finding container 556ae2b43076e85a5ab947d43ce5dbe56ff8eeeee17ea9ce3d88092e2b33a5e3: Status 404 returned error can't find the container with id 556ae2b43076e85a5ab947d43ce5dbe56ff8eeeee17ea9ce3d88092e2b33a5e3 Dec 08 19:40:27 crc kubenswrapper[5125]: I1208 19:40:27.258866 5125 generic.go:358] "Generic (PLEG): container finished" podID="7a97abc6-7fd0-4f78-b973-23a62d4c8f20" containerID="8578781f396a2fc2e5416b403c1f07e6744aec57b7f3a98d77c62b5fb917796c" exitCode=0 Dec 08 19:40:27 crc kubenswrapper[5125]: I1208 19:40:27.259012 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" event={"ID":"7a97abc6-7fd0-4f78-b973-23a62d4c8f20","Type":"ContainerDied","Data":"8578781f396a2fc2e5416b403c1f07e6744aec57b7f3a98d77c62b5fb917796c"} Dec 08 19:40:27 crc kubenswrapper[5125]: I1208 19:40:27.259071 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" event={"ID":"7a97abc6-7fd0-4f78-b973-23a62d4c8f20","Type":"ContainerStarted","Data":"556ae2b43076e85a5ab947d43ce5dbe56ff8eeeee17ea9ce3d88092e2b33a5e3"} Dec 08 19:40:28 crc kubenswrapper[5125]: I1208 19:40:28.266181 5125 generic.go:358] "Generic (PLEG): container finished" 
podID="7a97abc6-7fd0-4f78-b973-23a62d4c8f20" containerID="cfe2c493ba7cf624749645825e95924b635aeeac3c1207d0365904843641aeda" exitCode=0 Dec 08 19:40:28 crc kubenswrapper[5125]: I1208 19:40:28.266231 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" event={"ID":"7a97abc6-7fd0-4f78-b973-23a62d4c8f20","Type":"ContainerDied","Data":"cfe2c493ba7cf624749645825e95924b635aeeac3c1207d0365904843641aeda"} Dec 08 19:40:29 crc kubenswrapper[5125]: I1208 19:40:29.286376 5125 generic.go:358] "Generic (PLEG): container finished" podID="7a97abc6-7fd0-4f78-b973-23a62d4c8f20" containerID="64f0c832265472936bbc4bd3596fe8900468cb6e43635cc1a4398e8827b14b22" exitCode=0 Dec 08 19:40:29 crc kubenswrapper[5125]: I1208 19:40:29.286544 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" event={"ID":"7a97abc6-7fd0-4f78-b973-23a62d4c8f20","Type":"ContainerDied","Data":"64f0c832265472936bbc4bd3596fe8900468cb6e43635cc1a4398e8827b14b22"} Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.006821 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx"] Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.025184 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.028520 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx"] Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.066725 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx\" (UID: \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.066822 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bk4k\" (UniqueName: \"kubernetes.io/projected/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-kube-api-access-6bk4k\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx\" (UID: \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.066854 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx\" (UID: \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.167586 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6bk4k\" (UniqueName: 
\"kubernetes.io/projected/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-kube-api-access-6bk4k\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx\" (UID: \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.167660 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx\" (UID: \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.167777 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx\" (UID: \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.168288 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx\" (UID: \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.168407 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx\" (UID: 
\"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.201589 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bk4k\" (UniqueName: \"kubernetes.io/projected/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-kube-api-access-6bk4k\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx\" (UID: \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.340676 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.645404 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.677651 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn888\" (UniqueName: \"kubernetes.io/projected/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-kube-api-access-bn888\") pod \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.677734 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-bundle\") pod \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.677794 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-util\") pod \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\" (UID: \"7a97abc6-7fd0-4f78-b973-23a62d4c8f20\") " Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.680391 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-bundle" (OuterVolumeSpecName: "bundle") pod "7a97abc6-7fd0-4f78-b973-23a62d4c8f20" (UID: "7a97abc6-7fd0-4f78-b973-23a62d4c8f20"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.708888 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-kube-api-access-bn888" (OuterVolumeSpecName: "kube-api-access-bn888") pod "7a97abc6-7fd0-4f78-b973-23a62d4c8f20" (UID: "7a97abc6-7fd0-4f78-b973-23a62d4c8f20"). InnerVolumeSpecName "kube-api-access-bn888". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.731380 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-util" (OuterVolumeSpecName: "util") pod "7a97abc6-7fd0-4f78-b973-23a62d4c8f20" (UID: "7a97abc6-7fd0-4f78-b973-23a62d4c8f20"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.780853 5125 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.780890 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bn888\" (UniqueName: \"kubernetes.io/projected/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-kube-api-access-bn888\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.780902 5125 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a97abc6-7fd0-4f78-b973-23a62d4c8f20-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:40:30 crc kubenswrapper[5125]: I1208 19:40:30.821056 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx"] Dec 08 19:40:30 crc kubenswrapper[5125]: W1208 19:40:30.827363 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcddb0bdd_79a6_4f1d_b55b_f59262c9a034.slice/crio-cf433e55311ae043680296182fa13d771434d15af5fe96274767e3bf590f8a34 WatchSource:0}: Error finding container cf433e55311ae043680296182fa13d771434d15af5fe96274767e3bf590f8a34: Status 404 returned error can't find the container with id cf433e55311ae043680296182fa13d771434d15af5fe96274767e3bf590f8a34 Dec 08 19:40:31 crc kubenswrapper[5125]: I1208 19:40:31.297910 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" event={"ID":"7a97abc6-7fd0-4f78-b973-23a62d4c8f20","Type":"ContainerDied","Data":"556ae2b43076e85a5ab947d43ce5dbe56ff8eeeee17ea9ce3d88092e2b33a5e3"} Dec 08 19:40:31 crc kubenswrapper[5125]: I1208 
19:40:31.298222 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="556ae2b43076e85a5ab947d43ce5dbe56ff8eeeee17ea9ce3d88092e2b33a5e3" Dec 08 19:40:31 crc kubenswrapper[5125]: I1208 19:40:31.297942 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5emjmcd" Dec 08 19:40:31 crc kubenswrapper[5125]: I1208 19:40:31.299229 5125 generic.go:358] "Generic (PLEG): container finished" podID="cddb0bdd-79a6-4f1d-b55b-f59262c9a034" containerID="568e1dd5abc75004bce790e88cb6e4a03422ff63918c75466b6c295cffa5d631" exitCode=0 Dec 08 19:40:31 crc kubenswrapper[5125]: I1208 19:40:31.299297 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" event={"ID":"cddb0bdd-79a6-4f1d-b55b-f59262c9a034","Type":"ContainerDied","Data":"568e1dd5abc75004bce790e88cb6e4a03422ff63918c75466b6c295cffa5d631"} Dec 08 19:40:31 crc kubenswrapper[5125]: I1208 19:40:31.299358 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" event={"ID":"cddb0bdd-79a6-4f1d-b55b-f59262c9a034","Type":"ContainerStarted","Data":"cf433e55311ae043680296182fa13d771434d15af5fe96274767e3bf590f8a34"} Dec 08 19:40:35 crc kubenswrapper[5125]: I1208 19:40:35.337622 5125 generic.go:358] "Generic (PLEG): container finished" podID="cddb0bdd-79a6-4f1d-b55b-f59262c9a034" containerID="01e358e3a384fc6a955e55bb7256e34d55ae203c2c728af9089fb112743cbfa3" exitCode=0 Dec 08 19:40:35 crc kubenswrapper[5125]: I1208 19:40:35.337740 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" event={"ID":"cddb0bdd-79a6-4f1d-b55b-f59262c9a034","Type":"ContainerDied","Data":"01e358e3a384fc6a955e55bb7256e34d55ae203c2c728af9089fb112743cbfa3"} 
Dec 08 19:40:36 crc kubenswrapper[5125]: I1208 19:40:36.344630 5125 generic.go:358] "Generic (PLEG): container finished" podID="cddb0bdd-79a6-4f1d-b55b-f59262c9a034" containerID="be8e0325a276428dcefeef88348e042365c69f950639c117c96db64210461293" exitCode=0
Dec 08 19:40:36 crc kubenswrapper[5125]: I1208 19:40:36.344679 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" event={"ID":"cddb0bdd-79a6-4f1d-b55b-f59262c9a034","Type":"ContainerDied","Data":"be8e0325a276428dcefeef88348e042365c69f950639c117c96db64210461293"}
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.563997 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.670316 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bk4k\" (UniqueName: \"kubernetes.io/projected/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-kube-api-access-6bk4k\") pod \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\" (UID: \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") "
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.670412 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-bundle\") pod \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\" (UID: \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") "
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.670442 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-util\") pod \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\" (UID: \"cddb0bdd-79a6-4f1d-b55b-f59262c9a034\") "
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.671438 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-bundle" (OuterVolumeSpecName: "bundle") pod "cddb0bdd-79a6-4f1d-b55b-f59262c9a034" (UID: "cddb0bdd-79a6-4f1d-b55b-f59262c9a034"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.679196 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-util" (OuterVolumeSpecName: "util") pod "cddb0bdd-79a6-4f1d-b55b-f59262c9a034" (UID: "cddb0bdd-79a6-4f1d-b55b-f59262c9a034"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.705316 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-kube-api-access-6bk4k" (OuterVolumeSpecName: "kube-api-access-6bk4k") pod "cddb0bdd-79a6-4f1d-b55b-f59262c9a034" (UID: "cddb0bdd-79a6-4f1d-b55b-f59262c9a034"). InnerVolumeSpecName "kube-api-access-6bk4k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.771576 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6bk4k\" (UniqueName: \"kubernetes.io/projected/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-kube-api-access-6bk4k\") on node \"crc\" DevicePath \"\""
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.771597 5125 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.771625 5125 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cddb0bdd-79a6-4f1d-b55b-f59262c9a034-util\") on node \"crc\" DevicePath \"\""
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.915539 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-xr89d"]
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.916513 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a97abc6-7fd0-4f78-b973-23a62d4c8f20" containerName="extract"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.916645 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a97abc6-7fd0-4f78-b973-23a62d4c8f20" containerName="extract"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.916776 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cddb0bdd-79a6-4f1d-b55b-f59262c9a034" containerName="pull"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.916841 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="cddb0bdd-79a6-4f1d-b55b-f59262c9a034" containerName="pull"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.916899 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cddb0bdd-79a6-4f1d-b55b-f59262c9a034" containerName="util"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.916973 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="cddb0bdd-79a6-4f1d-b55b-f59262c9a034" containerName="util"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.917055 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cddb0bdd-79a6-4f1d-b55b-f59262c9a034" containerName="extract"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.917130 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="cddb0bdd-79a6-4f1d-b55b-f59262c9a034" containerName="extract"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.917197 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a97abc6-7fd0-4f78-b973-23a62d4c8f20" containerName="pull"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.917248 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a97abc6-7fd0-4f78-b973-23a62d4c8f20" containerName="pull"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.917310 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7a97abc6-7fd0-4f78-b973-23a62d4c8f20" containerName="util"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.917366 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a97abc6-7fd0-4f78-b973-23a62d4c8f20" containerName="util"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.917501 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="7a97abc6-7fd0-4f78-b973-23a62d4c8f20" containerName="extract"
Dec 08 19:40:37 crc kubenswrapper[5125]: I1208 19:40:37.917561 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="cddb0bdd-79a6-4f1d-b55b-f59262c9a034" containerName="extract"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.023209 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-xr89d"]
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.023392 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-xr89d"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.025231 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-t8d8s\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.025413 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.026166 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.050335 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9"]
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.055972 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.058097 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-m52cs\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.058097 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.065398 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4"]
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.068904 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.070154 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9"]
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.080675 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4"]
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.177597 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9c63e62-3efb-430d-b680-6e55132e6a13-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-fhqt9\" (UID: \"c9c63e62-3efb-430d-b680-6e55132e6a13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.177804 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9c63e62-3efb-430d-b680-6e55132e6a13-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-fhqt9\" (UID: \"c9c63e62-3efb-430d-b680-6e55132e6a13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.177950 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzzbg\" (UniqueName: \"kubernetes.io/projected/5f1d560a-7b82-4338-b856-4d6139d58ed2-kube-api-access-dzzbg\") pod \"obo-prometheus-operator-86648f486b-xr89d\" (UID: \"5f1d560a-7b82-4338-b856-4d6139d58ed2\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-xr89d"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.178036 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fef3849-8ca2-4973-8455-fb200f6d31fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-56pk4\" (UID: \"1fef3849-8ca2-4973-8455-fb200f6d31fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.178153 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fef3849-8ca2-4973-8455-fb200f6d31fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-56pk4\" (UID: \"1fef3849-8ca2-4973-8455-fb200f6d31fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.228344 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-nnkw5"]
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.280253 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9c63e62-3efb-430d-b680-6e55132e6a13-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-fhqt9\" (UID: \"c9c63e62-3efb-430d-b680-6e55132e6a13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.280332 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dzzbg\" (UniqueName: \"kubernetes.io/projected/5f1d560a-7b82-4338-b856-4d6139d58ed2-kube-api-access-dzzbg\") pod \"obo-prometheus-operator-86648f486b-xr89d\" (UID: \"5f1d560a-7b82-4338-b856-4d6139d58ed2\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-xr89d"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.280363 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fef3849-8ca2-4973-8455-fb200f6d31fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-56pk4\" (UID: \"1fef3849-8ca2-4973-8455-fb200f6d31fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.280414 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fef3849-8ca2-4973-8455-fb200f6d31fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-56pk4\" (UID: \"1fef3849-8ca2-4973-8455-fb200f6d31fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.280457 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9c63e62-3efb-430d-b680-6e55132e6a13-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-fhqt9\" (UID: \"c9c63e62-3efb-430d-b680-6e55132e6a13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.290693 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1fef3849-8ca2-4973-8455-fb200f6d31fd-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-56pk4\" (UID: \"1fef3849-8ca2-4973-8455-fb200f6d31fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.290762 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9c63e62-3efb-430d-b680-6e55132e6a13-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-fhqt9\" (UID: \"c9c63e62-3efb-430d-b680-6e55132e6a13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.291129 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1fef3849-8ca2-4973-8455-fb200f6d31fd-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-56pk4\" (UID: \"1fef3849-8ca2-4973-8455-fb200f6d31fd\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.294185 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9c63e62-3efb-430d-b680-6e55132e6a13-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-895857757-fhqt9\" (UID: \"c9c63e62-3efb-430d-b680-6e55132e6a13\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.299734 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzzbg\" (UniqueName: \"kubernetes.io/projected/5f1d560a-7b82-4338-b856-4d6139d58ed2-kube-api-access-dzzbg\") pod \"obo-prometheus-operator-86648f486b-xr89d\" (UID: \"5f1d560a-7b82-4338-b856-4d6139d58ed2\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-xr89d"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.339767 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-xr89d"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.368220 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.380854 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.577110 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx" event={"ID":"cddb0bdd-79a6-4f1d-b55b-f59262c9a034","Type":"ContainerDied","Data":"cf433e55311ae043680296182fa13d771434d15af5fe96274767e3bf590f8a34"}
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.577172 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf433e55311ae043680296182fa13d771434d15af5fe96274767e3bf590f8a34"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.577188 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-nnkw5"]
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.577204 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-dgtjc"]
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.578971 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-nnkw5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.581118 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.581620 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-tlp2z\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.632159 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931azkxdx"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.633818 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-dgtjc"]
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.633846 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-bd474cd6c-7qmc5"]
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.633994 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.640736 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-snx79\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.641661 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-bd474cd6c-7qmc5"]
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.641820 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.646954 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.647241 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.647390 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-h9fsr\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.647516 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\""
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.687092 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b3912c9-819a-4575-813c-2bfc6ab56d9c-observability-operator-tls\") pod \"observability-operator-78c97476f4-nnkw5\" (UID: \"1b3912c9-819a-4575-813c-2bfc6ab56d9c\") " pod="openshift-operators/observability-operator-78c97476f4-nnkw5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.687466 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5fz9\" (UniqueName: \"kubernetes.io/projected/1b3912c9-819a-4575-813c-2bfc6ab56d9c-kube-api-access-g5fz9\") pod \"observability-operator-78c97476f4-nnkw5\" (UID: \"1b3912c9-819a-4575-813c-2bfc6ab56d9c\") " pod="openshift-operators/observability-operator-78c97476f4-nnkw5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.788480 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b3912c9-819a-4575-813c-2bfc6ab56d9c-observability-operator-tls\") pod \"observability-operator-78c97476f4-nnkw5\" (UID: \"1b3912c9-819a-4575-813c-2bfc6ab56d9c\") " pod="openshift-operators/observability-operator-78c97476f4-nnkw5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.788569 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g5fz9\" (UniqueName: \"kubernetes.io/projected/1b3912c9-819a-4575-813c-2bfc6ab56d9c-kube-api-access-g5fz9\") pod \"observability-operator-78c97476f4-nnkw5\" (UID: \"1b3912c9-819a-4575-813c-2bfc6ab56d9c\") " pod="openshift-operators/observability-operator-78c97476f4-nnkw5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.788642 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nppb\" (UniqueName: \"kubernetes.io/projected/1bcebedc-7100-44a7-ad7e-f1b8709c53c7-kube-api-access-2nppb\") pod \"perses-operator-68bdb49cbf-dgtjc\" (UID: \"1bcebedc-7100-44a7-ad7e-f1b8709c53c7\") " pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.789845 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr5lq\" (UniqueName: \"kubernetes.io/projected/49ce59eb-30b9-40a4-b52d-df8e481c67ba-kube-api-access-wr5lq\") pod \"elastic-operator-bd474cd6c-7qmc5\" (UID: \"49ce59eb-30b9-40a4-b52d-df8e481c67ba\") " pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.789879 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49ce59eb-30b9-40a4-b52d-df8e481c67ba-webhook-cert\") pod \"elastic-operator-bd474cd6c-7qmc5\" (UID: \"49ce59eb-30b9-40a4-b52d-df8e481c67ba\") " pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.789965 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bcebedc-7100-44a7-ad7e-f1b8709c53c7-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-dgtjc\" (UID: \"1bcebedc-7100-44a7-ad7e-f1b8709c53c7\") " pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.790003 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49ce59eb-30b9-40a4-b52d-df8e481c67ba-apiservice-cert\") pod \"elastic-operator-bd474cd6c-7qmc5\" (UID: \"49ce59eb-30b9-40a4-b52d-df8e481c67ba\") " pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.797439 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1b3912c9-819a-4575-813c-2bfc6ab56d9c-observability-operator-tls\") pod \"observability-operator-78c97476f4-nnkw5\" (UID: \"1b3912c9-819a-4575-813c-2bfc6ab56d9c\") " pod="openshift-operators/observability-operator-78c97476f4-nnkw5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.822461 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5fz9\" (UniqueName: \"kubernetes.io/projected/1b3912c9-819a-4575-813c-2bfc6ab56d9c-kube-api-access-g5fz9\") pod \"observability-operator-78c97476f4-nnkw5\" (UID: \"1b3912c9-819a-4575-813c-2bfc6ab56d9c\") " pod="openshift-operators/observability-operator-78c97476f4-nnkw5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.896923 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49ce59eb-30b9-40a4-b52d-df8e481c67ba-webhook-cert\") pod \"elastic-operator-bd474cd6c-7qmc5\" (UID: \"49ce59eb-30b9-40a4-b52d-df8e481c67ba\") " pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.897011 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bcebedc-7100-44a7-ad7e-f1b8709c53c7-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-dgtjc\" (UID: \"1bcebedc-7100-44a7-ad7e-f1b8709c53c7\") " pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.897052 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49ce59eb-30b9-40a4-b52d-df8e481c67ba-apiservice-cert\") pod \"elastic-operator-bd474cd6c-7qmc5\" (UID: \"49ce59eb-30b9-40a4-b52d-df8e481c67ba\") " pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.897179 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2nppb\" (UniqueName: \"kubernetes.io/projected/1bcebedc-7100-44a7-ad7e-f1b8709c53c7-kube-api-access-2nppb\") pod \"perses-operator-68bdb49cbf-dgtjc\" (UID: \"1bcebedc-7100-44a7-ad7e-f1b8709c53c7\") " pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.897219 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wr5lq\" (UniqueName: \"kubernetes.io/projected/49ce59eb-30b9-40a4-b52d-df8e481c67ba-kube-api-access-wr5lq\") pod \"elastic-operator-bd474cd6c-7qmc5\" (UID: \"49ce59eb-30b9-40a4-b52d-df8e481c67ba\") " pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.898206 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1bcebedc-7100-44a7-ad7e-f1b8709c53c7-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-dgtjc\" (UID: \"1bcebedc-7100-44a7-ad7e-f1b8709c53c7\") " pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.901171 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49ce59eb-30b9-40a4-b52d-df8e481c67ba-apiservice-cert\") pod \"elastic-operator-bd474cd6c-7qmc5\" (UID: \"49ce59eb-30b9-40a4-b52d-df8e481c67ba\") " pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.901399 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49ce59eb-30b9-40a4-b52d-df8e481c67ba-webhook-cert\") pod \"elastic-operator-bd474cd6c-7qmc5\" (UID: \"49ce59eb-30b9-40a4-b52d-df8e481c67ba\") " pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.903984 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-nnkw5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.915592 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr5lq\" (UniqueName: \"kubernetes.io/projected/49ce59eb-30b9-40a4-b52d-df8e481c67ba-kube-api-access-wr5lq\") pod \"elastic-operator-bd474cd6c-7qmc5\" (UID: \"49ce59eb-30b9-40a4-b52d-df8e481c67ba\") " pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.917594 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nppb\" (UniqueName: \"kubernetes.io/projected/1bcebedc-7100-44a7-ad7e-f1b8709c53c7-kube-api-access-2nppb\") pod \"perses-operator-68bdb49cbf-dgtjc\" (UID: \"1bcebedc-7100-44a7-ad7e-f1b8709c53c7\") " pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc"
Dec 08 19:40:38 crc kubenswrapper[5125]: I1208 19:40:38.976907 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9"]
Dec 08 19:40:39 crc kubenswrapper[5125]: W1208 19:40:39.007757 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9c63e62_3efb_430d_b680_6e55132e6a13.slice/crio-49336af82531467bcd8fa197a2829494eb19d5ebcb659df0d965842750e822a0 WatchSource:0}: Error finding container 49336af82531467bcd8fa197a2829494eb19d5ebcb659df0d965842750e822a0: Status 404 returned error can't find the container with id 49336af82531467bcd8fa197a2829494eb19d5ebcb659df0d965842750e822a0
Dec 08 19:40:39 crc kubenswrapper[5125]: I1208 19:40:39.015869 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc"
Dec 08 19:40:39 crc kubenswrapper[5125]: I1208 19:40:39.030387 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5"
Dec 08 19:40:39 crc kubenswrapper[5125]: I1208 19:40:39.095776 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4"]
Dec 08 19:40:39 crc kubenswrapper[5125]: I1208 19:40:39.222972 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-xr89d"]
Dec 08 19:40:39 crc kubenswrapper[5125]: W1208 19:40:39.236709 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f1d560a_7b82_4338_b856_4d6139d58ed2.slice/crio-421463d59c4063f696b597349e4b2da9d912c6cd7e3dec01720f80fa240ba538 WatchSource:0}: Error finding container 421463d59c4063f696b597349e4b2da9d912c6cd7e3dec01720f80fa240ba538: Status 404 returned error can't find the container with id 421463d59c4063f696b597349e4b2da9d912c6cd7e3dec01720f80fa240ba538
Dec 08 19:40:39 crc kubenswrapper[5125]: I1208 19:40:39.274827 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-nnkw5"]
Dec 08 19:40:39 crc kubenswrapper[5125]: W1208 19:40:39.296425 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b3912c9_819a_4575_813c_2bfc6ab56d9c.slice/crio-7922eb747a2b938f0c3b6011279f13675321fe65b9f2b3f6d5e037fe2bd06047 WatchSource:0}: Error finding container 7922eb747a2b938f0c3b6011279f13675321fe65b9f2b3f6d5e037fe2bd06047: Status 404 returned error can't find the container with id 7922eb747a2b938f0c3b6011279f13675321fe65b9f2b3f6d5e037fe2bd06047
Dec 08 19:40:39 crc kubenswrapper[5125]: I1208 19:40:39.365567 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-xr89d" event={"ID":"5f1d560a-7b82-4338-b856-4d6139d58ed2","Type":"ContainerStarted","Data":"421463d59c4063f696b597349e4b2da9d912c6cd7e3dec01720f80fa240ba538"}
Dec 08 19:40:39 crc kubenswrapper[5125]: I1208 19:40:39.366811 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-nnkw5" event={"ID":"1b3912c9-819a-4575-813c-2bfc6ab56d9c","Type":"ContainerStarted","Data":"7922eb747a2b938f0c3b6011279f13675321fe65b9f2b3f6d5e037fe2bd06047"}
Dec 08 19:40:39 crc kubenswrapper[5125]: I1208 19:40:39.367984 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9" event={"ID":"c9c63e62-3efb-430d-b680-6e55132e6a13","Type":"ContainerStarted","Data":"49336af82531467bcd8fa197a2829494eb19d5ebcb659df0d965842750e822a0"}
Dec 08 19:40:39 crc kubenswrapper[5125]: I1208 19:40:39.369351 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4" event={"ID":"1fef3849-8ca2-4973-8455-fb200f6d31fd","Type":"ContainerStarted","Data":"2b3fab8d4c26d5435b5d7d9dc41f34b91c12ea61ccd9325798fad23c3869c7e3"}
Dec 08 19:40:39 crc kubenswrapper[5125]: I1208 19:40:39.527930 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-dgtjc"]
Dec 08 19:40:39 crc kubenswrapper[5125]: W1208 19:40:39.532695 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bcebedc_7100_44a7_ad7e_f1b8709c53c7.slice/crio-40aff971df261bda5fb1586dc4326c9f4a856bb50e91e87caddb0304772f5bde WatchSource:0}: Error finding container 40aff971df261bda5fb1586dc4326c9f4a856bb50e91e87caddb0304772f5bde: Status 404 returned error can't find the container with id 40aff971df261bda5fb1586dc4326c9f4a856bb50e91e87caddb0304772f5bde
Dec 08 19:40:39 crc kubenswrapper[5125]: I1208 19:40:39.597487 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-bd474cd6c-7qmc5"]
Dec 08 19:40:39 crc kubenswrapper[5125]: W1208 19:40:39.599124 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49ce59eb_30b9_40a4_b52d_df8e481c67ba.slice/crio-bf41b8dd436fb6f4f0467796104d51506db13f6c6e84936e7bc7515b47ba0700 WatchSource:0}: Error finding container bf41b8dd436fb6f4f0467796104d51506db13f6c6e84936e7bc7515b47ba0700: Status 404 returned error can't find the container with id bf41b8dd436fb6f4f0467796104d51506db13f6c6e84936e7bc7515b47ba0700
Dec 08 19:40:40 crc kubenswrapper[5125]: I1208 19:40:40.217205 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rvbzh"
Dec 08 19:40:40 crc kubenswrapper[5125]: I1208 19:40:40.272462 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hgxtj"]
Dec 08 19:40:40 crc kubenswrapper[5125]: I1208 19:40:40.423951 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5" event={"ID":"49ce59eb-30b9-40a4-b52d-df8e481c67ba","Type":"ContainerStarted","Data":"bf41b8dd436fb6f4f0467796104d51506db13f6c6e84936e7bc7515b47ba0700"}
Dec 08 19:40:40 crc kubenswrapper[5125]: I1208 19:40:40.434882 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc" event={"ID":"1bcebedc-7100-44a7-ad7e-f1b8709c53c7","Type":"ContainerStarted","Data":"40aff971df261bda5fb1586dc4326c9f4a856bb50e91e87caddb0304772f5bde"}
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.200752 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9"]
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.210482 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9"
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.222047 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-bgtqh\""
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.223164 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.223754 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.228185 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9"]
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.298731 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3ebf457c-8593-4632-af56-29f5ff36bd3d-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-rnnj9\" (UID: \"3ebf457c-8593-4632-af56-29f5ff36bd3d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9"
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.299111 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgbjl\" (UniqueName: \"kubernetes.io/projected/3ebf457c-8593-4632-af56-29f5ff36bd3d-kube-api-access-bgbjl\") pod \"cert-manager-operator-controller-manager-64c74584c4-rnnj9\" (UID: \"3ebf457c-8593-4632-af56-29f5ff36bd3d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9"
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.400985 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3ebf457c-8593-4632-af56-29f5ff36bd3d-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-rnnj9\" (UID: \"3ebf457c-8593-4632-af56-29f5ff36bd3d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9"
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.401048 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgbjl\" (UniqueName: \"kubernetes.io/projected/3ebf457c-8593-4632-af56-29f5ff36bd3d-kube-api-access-bgbjl\") pod \"cert-manager-operator-controller-manager-64c74584c4-rnnj9\" (UID: \"3ebf457c-8593-4632-af56-29f5ff36bd3d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9"
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.401857 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3ebf457c-8593-4632-af56-29f5ff36bd3d-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-rnnj9\" (UID: \"3ebf457c-8593-4632-af56-29f5ff36bd3d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9"
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.423035 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgbjl\" (UniqueName: \"kubernetes.io/projected/3ebf457c-8593-4632-af56-29f5ff36bd3d-kube-api-access-bgbjl\") pod \"cert-manager-operator-controller-manager-64c74584c4-rnnj9\" (UID: \"3ebf457c-8593-4632-af56-29f5ff36bd3d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9"
Dec 08 19:40:51 crc kubenswrapper[5125]: I1208 19:40:51.534958 5125 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9" Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.007387 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9"] Dec 08 19:40:54 crc kubenswrapper[5125]: W1208 19:40:54.033848 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ebf457c_8593_4632_af56_29f5ff36bd3d.slice/crio-49e9ab46ecf04d2ea00e5a246a867044196e41db45f452aa47b8822f07592b9c WatchSource:0}: Error finding container 49e9ab46ecf04d2ea00e5a246a867044196e41db45f452aa47b8822f07592b9c: Status 404 returned error can't find the container with id 49e9ab46ecf04d2ea00e5a246a867044196e41db45f452aa47b8822f07592b9c Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.598780 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-xr89d" event={"ID":"5f1d560a-7b82-4338-b856-4d6139d58ed2","Type":"ContainerStarted","Data":"e8c94c36a0cc7a7009d47632d3d6e349081b93813e910919c558467a3b86dc34"} Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.600100 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-nnkw5" event={"ID":"1b3912c9-819a-4575-813c-2bfc6ab56d9c","Type":"ContainerStarted","Data":"2683a36f9246747e636c6080a6f5264af5d95b2b2a0f879a99c8ee8723ef947e"} Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.600387 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-nnkw5" Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.601153 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9" 
event={"ID":"3ebf457c-8593-4632-af56-29f5ff36bd3d","Type":"ContainerStarted","Data":"49e9ab46ecf04d2ea00e5a246a867044196e41db45f452aa47b8822f07592b9c"} Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.602308 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-nnkw5" Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.602449 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9" event={"ID":"c9c63e62-3efb-430d-b680-6e55132e6a13","Type":"ContainerStarted","Data":"d971208754eb33fa93acde3519ae2a4dcdc66f9187bf55f0aaffb9d01aaed933"} Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.604680 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5" event={"ID":"49ce59eb-30b9-40a4-b52d-df8e481c67ba","Type":"ContainerStarted","Data":"fd9beabeb2594cd6f5ed1249cb0ef86589c1a31da3a6aa87e8231a6cc71d7041"} Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.606400 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc" event={"ID":"1bcebedc-7100-44a7-ad7e-f1b8709c53c7","Type":"ContainerStarted","Data":"dd74ae04bdf4ca439f1904d350ee8a2278bbc3cff132c140ae315a6b104141b3"} Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.606797 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc" Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.607865 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4" event={"ID":"1fef3849-8ca2-4973-8455-fb200f6d31fd","Type":"ContainerStarted","Data":"55001dc0d6fcc93e49fd0f2960616d808e848cd20558e96e5fcdd0ac79a29942"} Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.618048 5125 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-xr89d" podStartSLOduration=3.011583876 podStartE2EDuration="17.618026941s" podCreationTimestamp="2025-12-08 19:40:37 +0000 UTC" firstStartedPulling="2025-12-08 19:40:39.239335911 +0000 UTC m=+696.009826185" lastFinishedPulling="2025-12-08 19:40:53.845778976 +0000 UTC m=+710.616269250" observedRunningTime="2025-12-08 19:40:54.617000843 +0000 UTC m=+711.387491127" watchObservedRunningTime="2025-12-08 19:40:54.618026941 +0000 UTC m=+711.388517215" Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.636683 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-56pk4" podStartSLOduration=1.906179061 podStartE2EDuration="16.636656054s" podCreationTimestamp="2025-12-08 19:40:38 +0000 UTC" firstStartedPulling="2025-12-08 19:40:39.113663658 +0000 UTC m=+695.884153932" lastFinishedPulling="2025-12-08 19:40:53.844140651 +0000 UTC m=+710.614630925" observedRunningTime="2025-12-08 19:40:54.635181014 +0000 UTC m=+711.405671318" watchObservedRunningTime="2025-12-08 19:40:54.636656054 +0000 UTC m=+711.407146328" Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.658636 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-nnkw5" podStartSLOduration=2.160181972 podStartE2EDuration="16.65862047s" podCreationTimestamp="2025-12-08 19:40:38 +0000 UTC" firstStartedPulling="2025-12-08 19:40:39.303908251 +0000 UTC m=+696.074398525" lastFinishedPulling="2025-12-08 19:40:53.802346749 +0000 UTC m=+710.572837023" observedRunningTime="2025-12-08 19:40:54.655455963 +0000 UTC m=+711.425946247" watchObservedRunningTime="2025-12-08 19:40:54.65862047 +0000 UTC m=+711.429110744" Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.694522 5125 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc" podStartSLOduration=2.400753543 podStartE2EDuration="16.694502449s" podCreationTimestamp="2025-12-08 19:40:38 +0000 UTC" firstStartedPulling="2025-12-08 19:40:39.535153075 +0000 UTC m=+696.305643349" lastFinishedPulling="2025-12-08 19:40:53.828901981 +0000 UTC m=+710.599392255" observedRunningTime="2025-12-08 19:40:54.691494606 +0000 UTC m=+711.461984900" watchObservedRunningTime="2025-12-08 19:40:54.694502449 +0000 UTC m=+711.464992713" Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.726031 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-bd474cd6c-7qmc5" podStartSLOduration=10.836795941 podStartE2EDuration="16.726012847s" podCreationTimestamp="2025-12-08 19:40:38 +0000 UTC" firstStartedPulling="2025-12-08 19:40:39.601829412 +0000 UTC m=+696.372319686" lastFinishedPulling="2025-12-08 19:40:45.491046308 +0000 UTC m=+702.261536592" observedRunningTime="2025-12-08 19:40:54.721996756 +0000 UTC m=+711.492487050" watchObservedRunningTime="2025-12-08 19:40:54.726012847 +0000 UTC m=+711.496503121" Dec 08 19:40:54 crc kubenswrapper[5125]: I1208 19:40:54.744991 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-895857757-fhqt9" podStartSLOduration=1.969361873 podStartE2EDuration="16.74496967s" podCreationTimestamp="2025-12-08 19:40:38 +0000 UTC" firstStartedPulling="2025-12-08 19:40:39.026826545 +0000 UTC m=+695.797316819" lastFinishedPulling="2025-12-08 19:40:53.802434342 +0000 UTC m=+710.572924616" observedRunningTime="2025-12-08 19:40:54.739861099 +0000 UTC m=+711.510351393" watchObservedRunningTime="2025-12-08 19:40:54.74496967 +0000 UTC m=+711.515459944" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.815380 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:40:56 
crc kubenswrapper[5125]: I1208 19:40:56.826841 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.830069 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.830512 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.831005 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.831375 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.833045 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-r5d9w\"" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.833344 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.833787 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.834056 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.834360 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 08 19:40:56 
crc kubenswrapper[5125]: I1208 19:40:56.842968 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.953472 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.953532 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d329c344-3049-4112-9b20-c096a7dd4ad3-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.953560 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.953717 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.953770 5125 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.953799 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.953885 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.953946 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.954061 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.954097 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.954131 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.954157 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.954178 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.954215 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:56 crc kubenswrapper[5125]: I1208 19:40:56.954388 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.055879 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.055935 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.055957 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.055975 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056013 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056033 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056052 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 
19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056068 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056087 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056114 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056145 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056166 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-tmp-volume\") pod 
\"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056189 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d329c344-3049-4112-9b20-c096a7dd4ad3-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056207 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056228 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.056671 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.058483 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: 
\"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.062735 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.063208 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.063350 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.063561 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc 
kubenswrapper[5125]: I1208 19:40:57.064503 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.065289 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.066095 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.066733 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/d329c344-3049-4112-9b20-c096a7dd4ad3-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.066834 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " 
pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.068290 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/d329c344-3049-4112-9b20-c096a7dd4ad3-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.069868 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.071689 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.072707 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/d329c344-3049-4112-9b20-c096a7dd4ad3-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"d329c344-3049-4112-9b20-c096a7dd4ad3\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:57 crc kubenswrapper[5125]: I1208 19:40:57.168949 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:40:58 crc kubenswrapper[5125]: I1208 19:40:58.193347 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:40:58 crc kubenswrapper[5125]: W1208 19:40:58.199844 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd329c344_3049_4112_9b20_c096a7dd4ad3.slice/crio-7a8c7345526f0ea9bf4a9cf16cb9cd0700dd1ca7be76f4646250017062a81efa WatchSource:0}: Error finding container 7a8c7345526f0ea9bf4a9cf16cb9cd0700dd1ca7be76f4646250017062a81efa: Status 404 returned error can't find the container with id 7a8c7345526f0ea9bf4a9cf16cb9cd0700dd1ca7be76f4646250017062a81efa Dec 08 19:40:58 crc kubenswrapper[5125]: I1208 19:40:58.663277 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d329c344-3049-4112-9b20-c096a7dd4ad3","Type":"ContainerStarted","Data":"7a8c7345526f0ea9bf4a9cf16cb9cd0700dd1ca7be76f4646250017062a81efa"} Dec 08 19:40:58 crc kubenswrapper[5125]: I1208 19:40:58.665782 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9" event={"ID":"3ebf457c-8593-4632-af56-29f5ff36bd3d","Type":"ContainerStarted","Data":"9b4ea67943ef4678b75ce376ea45cfbfe92fbb3dc83e384dd4416c27b43dbe10"} Dec 08 19:40:58 crc kubenswrapper[5125]: I1208 19:40:58.684371 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-rnnj9" podStartSLOduration=3.767359636 podStartE2EDuration="7.684351465s" podCreationTimestamp="2025-12-08 19:40:51 +0000 UTC" firstStartedPulling="2025-12-08 19:40:54.037298695 +0000 UTC m=+710.807788969" lastFinishedPulling="2025-12-08 19:40:57.954290524 +0000 UTC m=+714.724780798" observedRunningTime="2025-12-08 
19:40:58.682868054 +0000 UTC m=+715.453358328" watchObservedRunningTime="2025-12-08 19:40:58.684351465 +0000 UTC m=+715.454841749" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.199394 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-4grq2"] Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.215702 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-4grq2"] Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.215861 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.221476 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.221726 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.221931 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-257nq\"" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.350507 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69b51416-7923-4adc-b7b5-373297955b92-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-4grq2\" (UID: \"69b51416-7923-4adc-b7b5-373297955b92\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.350573 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdnth\" (UniqueName: \"kubernetes.io/projected/69b51416-7923-4adc-b7b5-373297955b92-kube-api-access-bdnth\") pod 
\"cert-manager-webhook-7894b5b9b4-4grq2\" (UID: \"69b51416-7923-4adc-b7b5-373297955b92\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.452302 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69b51416-7923-4adc-b7b5-373297955b92-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-4grq2\" (UID: \"69b51416-7923-4adc-b7b5-373297955b92\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.452364 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bdnth\" (UniqueName: \"kubernetes.io/projected/69b51416-7923-4adc-b7b5-373297955b92-kube-api-access-bdnth\") pod \"cert-manager-webhook-7894b5b9b4-4grq2\" (UID: \"69b51416-7923-4adc-b7b5-373297955b92\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.475583 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdnth\" (UniqueName: \"kubernetes.io/projected/69b51416-7923-4adc-b7b5-373297955b92-kube-api-access-bdnth\") pod \"cert-manager-webhook-7894b5b9b4-4grq2\" (UID: \"69b51416-7923-4adc-b7b5-373297955b92\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.476207 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69b51416-7923-4adc-b7b5-373297955b92-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-4grq2\" (UID: \"69b51416-7923-4adc-b7b5-373297955b92\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.537651 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.623419 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz"] Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.633799 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz"] Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.633924 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.639904 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-rf6k2\"" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.757372 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a2ce55-cbcd-42be-9b0f-ba5dac01815f-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-4ltgz\" (UID: \"a1a2ce55-cbcd-42be-9b0f-ba5dac01815f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.757486 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4flmc\" (UniqueName: \"kubernetes.io/projected/a1a2ce55-cbcd-42be-9b0f-ba5dac01815f-kube-api-access-4flmc\") pod \"cert-manager-cainjector-7dbf76d5c8-4ltgz\" (UID: \"a1a2ce55-cbcd-42be-9b0f-ba5dac01815f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.858728 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4flmc\" (UniqueName: \"kubernetes.io/projected/a1a2ce55-cbcd-42be-9b0f-ba5dac01815f-kube-api-access-4flmc\") pod 
\"cert-manager-cainjector-7dbf76d5c8-4ltgz\" (UID: \"a1a2ce55-cbcd-42be-9b0f-ba5dac01815f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.859051 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a2ce55-cbcd-42be-9b0f-ba5dac01815f-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-4ltgz\" (UID: \"a1a2ce55-cbcd-42be-9b0f-ba5dac01815f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.882207 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4flmc\" (UniqueName: \"kubernetes.io/projected/a1a2ce55-cbcd-42be-9b0f-ba5dac01815f-kube-api-access-4flmc\") pod \"cert-manager-cainjector-7dbf76d5c8-4ltgz\" (UID: \"a1a2ce55-cbcd-42be-9b0f-ba5dac01815f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.887430 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a1a2ce55-cbcd-42be-9b0f-ba5dac01815f-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-4ltgz\" (UID: \"a1a2ce55-cbcd-42be-9b0f-ba5dac01815f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.953768 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-rf6k2\"" Dec 08 19:41:03 crc kubenswrapper[5125]: I1208 19:41:03.962102 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz" Dec 08 19:41:04 crc kubenswrapper[5125]: I1208 19:41:04.214657 5125 scope.go:117] "RemoveContainer" containerID="82365a532581dbff147b4fecbde17df6ef597ce16c4d2af233e94ba7124566d5" Dec 08 19:41:04 crc kubenswrapper[5125]: I1208 19:41:04.412020 5125 scope.go:117] "RemoveContainer" containerID="13ba0e7f154e48ac828db7f7f5d3fe68ede8bfbdd6535a66efbe94d63500d64e" Dec 08 19:41:04 crc kubenswrapper[5125]: I1208 19:41:04.553364 5125 scope.go:117] "RemoveContainer" containerID="d4a9c850c83720c0b1f939b9bca5ec0651b4f40eb50205066795aee541c91452" Dec 08 19:41:04 crc kubenswrapper[5125]: I1208 19:41:04.809767 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-4grq2"] Dec 08 19:41:04 crc kubenswrapper[5125]: W1208 19:41:04.819069 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69b51416_7923_4adc_b7b5_373297955b92.slice/crio-b9795bef0b147756b0551d67086ec3101f7f4d4bb5d1ffba5647f8429caa4bdb WatchSource:0}: Error finding container b9795bef0b147756b0551d67086ec3101f7f4d4bb5d1ffba5647f8429caa4bdb: Status 404 returned error can't find the container with id b9795bef0b147756b0551d67086ec3101f7f4d4bb5d1ffba5647f8429caa4bdb Dec 08 19:41:04 crc kubenswrapper[5125]: I1208 19:41:04.847418 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz"] Dec 08 19:41:04 crc kubenswrapper[5125]: W1208 19:41:04.853832 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1a2ce55_cbcd_42be_9b0f_ba5dac01815f.slice/crio-f2a92e952a5d45399c61f5b7bb0ee7465637cb5edf6365a531921be1eeabf5f3 WatchSource:0}: Error finding container f2a92e952a5d45399c61f5b7bb0ee7465637cb5edf6365a531921be1eeabf5f3: Status 404 returned error can't find the container with id 
f2a92e952a5d45399c61f5b7bb0ee7465637cb5edf6365a531921be1eeabf5f3 Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.309743 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-6jrg8"] Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.324090 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-6jrg8"] Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.324290 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-6jrg8" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.327345 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-sjzl6\"" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.341311 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" podUID="51fe67ff-4e90-4add-8447-58edc3e3d117" containerName="registry" containerID="cri-o://7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532" gracePeriod=30 Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.391542 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtpq5\" (UniqueName: \"kubernetes.io/projected/0245d83a-1d1d-4d90-bde3-a55cd4e060c6-kube-api-access-gtpq5\") pod \"infrawatch-operators-6jrg8\" (UID: \"0245d83a-1d1d-4d90-bde3-a55cd4e060c6\") " pod="service-telemetry/infrawatch-operators-6jrg8" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.493793 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gtpq5\" (UniqueName: \"kubernetes.io/projected/0245d83a-1d1d-4d90-bde3-a55cd4e060c6-kube-api-access-gtpq5\") pod \"infrawatch-operators-6jrg8\" (UID: \"0245d83a-1d1d-4d90-bde3-a55cd4e060c6\") " 
pod="service-telemetry/infrawatch-operators-6jrg8" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.523979 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtpq5\" (UniqueName: \"kubernetes.io/projected/0245d83a-1d1d-4d90-bde3-a55cd4e060c6-kube-api-access-gtpq5\") pod \"infrawatch-operators-6jrg8\" (UID: \"0245d83a-1d1d-4d90-bde3-a55cd4e060c6\") " pod="service-telemetry/infrawatch-operators-6jrg8" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.648652 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-6jrg8" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.752701 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.767241 5125 generic.go:358] "Generic (PLEG): container finished" podID="51fe67ff-4e90-4add-8447-58edc3e3d117" containerID="7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532" exitCode=0 Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.862656 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" event={"ID":"51fe67ff-4e90-4add-8447-58edc3e3d117","Type":"ContainerDied","Data":"7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532"} Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.862724 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" event={"ID":"51fe67ff-4e90-4add-8447-58edc3e3d117","Type":"ContainerDied","Data":"b1273f14d623d5b64bf0c546305c9a4caac9b2d8f44108c924b2ca90e85c7ee1"} Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.862739 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" 
event={"ID":"69b51416-7923-4adc-b7b5-373297955b92","Type":"ContainerStarted","Data":"b9795bef0b147756b0551d67086ec3101f7f4d4bb5d1ffba5647f8429caa4bdb"} Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.862754 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz" event={"ID":"a1a2ce55-cbcd-42be-9b0f-ba5dac01815f","Type":"ContainerStarted","Data":"f2a92e952a5d45399c61f5b7bb0ee7465637cb5edf6365a531921be1eeabf5f3"} Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.862776 5125 scope.go:117] "RemoveContainer" containerID="7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.913829 5125 scope.go:117] "RemoveContainer" containerID="7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.914090 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-tls\") pod \"51fe67ff-4e90-4add-8447-58edc3e3d117\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.914135 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fwnk\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-kube-api-access-8fwnk\") pod \"51fe67ff-4e90-4add-8447-58edc3e3d117\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.914170 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51fe67ff-4e90-4add-8447-58edc3e3d117-installation-pull-secrets\") pod \"51fe67ff-4e90-4add-8447-58edc3e3d117\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.914240 5125 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-certificates\") pod \"51fe67ff-4e90-4add-8447-58edc3e3d117\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.914277 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-trusted-ca\") pod \"51fe67ff-4e90-4add-8447-58edc3e3d117\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.914596 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-bound-sa-token\") pod \"51fe67ff-4e90-4add-8447-58edc3e3d117\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.914812 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"51fe67ff-4e90-4add-8447-58edc3e3d117\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.914840 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51fe67ff-4e90-4add-8447-58edc3e3d117-ca-trust-extracted\") pod \"51fe67ff-4e90-4add-8447-58edc3e3d117\" (UID: \"51fe67ff-4e90-4add-8447-58edc3e3d117\") " Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.915637 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-certificates" (OuterVolumeSpecName: 
"registry-certificates") pod "51fe67ff-4e90-4add-8447-58edc3e3d117" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:41:05 crc kubenswrapper[5125]: E1208 19:41:05.916775 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532\": container with ID starting with 7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532 not found: ID does not exist" containerID="7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.916807 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532"} err="failed to get container status \"7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532\": rpc error: code = NotFound desc = could not find container \"7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532\": container with ID starting with 7d7ab317db4ba316a6a3cbedc0934b63bdcc25848b2fb8c144bff1314f1e7532 not found: ID does not exist" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.917066 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "51fe67ff-4e90-4add-8447-58edc3e3d117" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.917240 5125 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.917257 5125 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.932888 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "51fe67ff-4e90-4add-8447-58edc3e3d117" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.952322 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "51fe67ff-4e90-4add-8447-58edc3e3d117" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.956814 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-kube-api-access-8fwnk" (OuterVolumeSpecName: "kube-api-access-8fwnk") pod "51fe67ff-4e90-4add-8447-58edc3e3d117" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117"). InnerVolumeSpecName "kube-api-access-8fwnk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.956917 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51fe67ff-4e90-4add-8447-58edc3e3d117-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "51fe67ff-4e90-4add-8447-58edc3e3d117" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.973321 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51fe67ff-4e90-4add-8447-58edc3e3d117-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "51fe67ff-4e90-4add-8447-58edc3e3d117" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:41:05 crc kubenswrapper[5125]: I1208 19:41:05.979073 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "51fe67ff-4e90-4add-8447-58edc3e3d117" (UID: "51fe67ff-4e90-4add-8447-58edc3e3d117"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 19:41:06 crc kubenswrapper[5125]: I1208 19:41:06.018871 5125 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:06 crc kubenswrapper[5125]: I1208 19:41:06.018912 5125 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/51fe67ff-4e90-4add-8447-58edc3e3d117-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:06 crc kubenswrapper[5125]: I1208 19:41:06.018922 5125 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:06 crc kubenswrapper[5125]: I1208 19:41:06.018930 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8fwnk\" (UniqueName: \"kubernetes.io/projected/51fe67ff-4e90-4add-8447-58edc3e3d117-kube-api-access-8fwnk\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:06 crc kubenswrapper[5125]: I1208 19:41:06.018941 5125 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/51fe67ff-4e90-4add-8447-58edc3e3d117-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:41:06 crc kubenswrapper[5125]: W1208 19:41:06.239254 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0245d83a_1d1d_4d90_bde3_a55cd4e060c6.slice/crio-db934f0e69b4c61fd4a9c3461eb973f3e52608b3810c45d572f1ce5030e2225f WatchSource:0}: Error finding container db934f0e69b4c61fd4a9c3461eb973f3e52608b3810c45d572f1ce5030e2225f: Status 404 returned error can't find the container with id db934f0e69b4c61fd4a9c3461eb973f3e52608b3810c45d572f1ce5030e2225f Dec 08 19:41:06 crc 
kubenswrapper[5125]: I1208 19:41:06.243805 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-6jrg8"] Dec 08 19:41:06 crc kubenswrapper[5125]: I1208 19:41:06.639053 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-dgtjc" Dec 08 19:41:06 crc kubenswrapper[5125]: I1208 19:41:06.864799 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-6jrg8" event={"ID":"0245d83a-1d1d-4d90-bde3-a55cd4e060c6","Type":"ContainerStarted","Data":"db934f0e69b4c61fd4a9c3461eb973f3e52608b3810c45d572f1ce5030e2225f"} Dec 08 19:41:06 crc kubenswrapper[5125]: I1208 19:41:06.866635 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-hgxtj" Dec 08 19:41:06 crc kubenswrapper[5125]: I1208 19:41:06.929748 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hgxtj"] Dec 08 19:41:06 crc kubenswrapper[5125]: I1208 19:41:06.973735 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-hgxtj"] Dec 08 19:41:07 crc kubenswrapper[5125]: I1208 19:41:07.793599 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51fe67ff-4e90-4add-8447-58edc3e3d117" path="/var/lib/kubelet/pods/51fe67ff-4e90-4add-8447-58edc3e3d117/volumes" Dec 08 19:41:22 crc kubenswrapper[5125]: I1208 19:41:22.116032 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-bpwnk"] Dec 08 19:41:22 crc kubenswrapper[5125]: I1208 19:41:22.117283 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="51fe67ff-4e90-4add-8447-58edc3e3d117" containerName="registry" Dec 08 19:41:22 crc kubenswrapper[5125]: I1208 19:41:22.117295 5125 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="51fe67ff-4e90-4add-8447-58edc3e3d117" containerName="registry" Dec 08 19:41:22 crc kubenswrapper[5125]: I1208 19:41:22.117422 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="51fe67ff-4e90-4add-8447-58edc3e3d117" containerName="registry" Dec 08 19:41:24 crc kubenswrapper[5125]: I1208 19:41:24.390209 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-bpwnk"] Dec 08 19:41:24 crc kubenswrapper[5125]: I1208 19:41:24.390432 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-bpwnk" Dec 08 19:41:24 crc kubenswrapper[5125]: I1208 19:41:24.393009 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-z29c7\"" Dec 08 19:41:24 crc kubenswrapper[5125]: I1208 19:41:24.492645 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/140783b6-c517-42f7-8cb0-dca40f7c6762-bound-sa-token\") pod \"cert-manager-858d87f86b-bpwnk\" (UID: \"140783b6-c517-42f7-8cb0-dca40f7c6762\") " pod="cert-manager/cert-manager-858d87f86b-bpwnk" Dec 08 19:41:24 crc kubenswrapper[5125]: I1208 19:41:24.492729 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg52v\" (UniqueName: \"kubernetes.io/projected/140783b6-c517-42f7-8cb0-dca40f7c6762-kube-api-access-vg52v\") pod \"cert-manager-858d87f86b-bpwnk\" (UID: \"140783b6-c517-42f7-8cb0-dca40f7c6762\") " pod="cert-manager/cert-manager-858d87f86b-bpwnk" Dec 08 19:41:24 crc kubenswrapper[5125]: I1208 19:41:24.594309 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/140783b6-c517-42f7-8cb0-dca40f7c6762-bound-sa-token\") pod \"cert-manager-858d87f86b-bpwnk\" (UID: \"140783b6-c517-42f7-8cb0-dca40f7c6762\") " 
pod="cert-manager/cert-manager-858d87f86b-bpwnk" Dec 08 19:41:24 crc kubenswrapper[5125]: I1208 19:41:24.594361 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vg52v\" (UniqueName: \"kubernetes.io/projected/140783b6-c517-42f7-8cb0-dca40f7c6762-kube-api-access-vg52v\") pod \"cert-manager-858d87f86b-bpwnk\" (UID: \"140783b6-c517-42f7-8cb0-dca40f7c6762\") " pod="cert-manager/cert-manager-858d87f86b-bpwnk" Dec 08 19:41:24 crc kubenswrapper[5125]: I1208 19:41:24.614848 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/140783b6-c517-42f7-8cb0-dca40f7c6762-bound-sa-token\") pod \"cert-manager-858d87f86b-bpwnk\" (UID: \"140783b6-c517-42f7-8cb0-dca40f7c6762\") " pod="cert-manager/cert-manager-858d87f86b-bpwnk" Dec 08 19:41:24 crc kubenswrapper[5125]: I1208 19:41:24.615076 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg52v\" (UniqueName: \"kubernetes.io/projected/140783b6-c517-42f7-8cb0-dca40f7c6762-kube-api-access-vg52v\") pod \"cert-manager-858d87f86b-bpwnk\" (UID: \"140783b6-c517-42f7-8cb0-dca40f7c6762\") " pod="cert-manager/cert-manager-858d87f86b-bpwnk" Dec 08 19:41:24 crc kubenswrapper[5125]: I1208 19:41:24.711336 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-bpwnk" Dec 08 19:41:28 crc kubenswrapper[5125]: I1208 19:41:28.579028 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-bpwnk"] Dec 08 19:41:28 crc kubenswrapper[5125]: W1208 19:41:28.585399 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod140783b6_c517_42f7_8cb0_dca40f7c6762.slice/crio-cb85be0e6353968fec9970d96a85914fa4d5bd8c8c5af4102082c623a5c6aafd WatchSource:0}: Error finding container cb85be0e6353968fec9970d96a85914fa4d5bd8c8c5af4102082c623a5c6aafd: Status 404 returned error can't find the container with id cb85be0e6353968fec9970d96a85914fa4d5bd8c8c5af4102082c623a5c6aafd Dec 08 19:41:29 crc kubenswrapper[5125]: I1208 19:41:29.039167 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-6jrg8" event={"ID":"0245d83a-1d1d-4d90-bde3-a55cd4e060c6","Type":"ContainerStarted","Data":"0edb0d0525af835bf30d5cea27e4de4f363204e7299e95f07a503e68eaa90d6a"} Dec 08 19:41:29 crc kubenswrapper[5125]: I1208 19:41:29.051437 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" event={"ID":"69b51416-7923-4adc-b7b5-373297955b92","Type":"ContainerStarted","Data":"d5c6c6e72b3bae43c75018498f5b87b8abc880d8ea7ef9c8d6950476df1d3c2a"} Dec 08 19:41:29 crc kubenswrapper[5125]: I1208 19:41:29.051630 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" Dec 08 19:41:29 crc kubenswrapper[5125]: I1208 19:41:29.059516 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-bpwnk" event={"ID":"140783b6-c517-42f7-8cb0-dca40f7c6762","Type":"ContainerStarted","Data":"cb85be0e6353968fec9970d96a85914fa4d5bd8c8c5af4102082c623a5c6aafd"} Dec 08 19:41:29 crc kubenswrapper[5125]: I1208 
19:41:29.061319 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz" event={"ID":"a1a2ce55-cbcd-42be-9b0f-ba5dac01815f","Type":"ContainerStarted","Data":"2d0f17c24852dd7c7154d5f5b12996d0b84a02ffa4fa63dc22bf566a5070b5a8"} Dec 08 19:41:29 crc kubenswrapper[5125]: I1208 19:41:29.062954 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-6jrg8" podStartSLOduration=2.064795215 podStartE2EDuration="24.062929426s" podCreationTimestamp="2025-12-08 19:41:05 +0000 UTC" firstStartedPulling="2025-12-08 19:41:06.241452363 +0000 UTC m=+723.011942637" lastFinishedPulling="2025-12-08 19:41:28.239586564 +0000 UTC m=+745.010076848" observedRunningTime="2025-12-08 19:41:29.059245325 +0000 UTC m=+745.829735619" watchObservedRunningTime="2025-12-08 19:41:29.062929426 +0000 UTC m=+745.833419720" Dec 08 19:41:29 crc kubenswrapper[5125]: I1208 19:41:29.099411 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" podStartSLOduration=2.686012964 podStartE2EDuration="26.099386361s" podCreationTimestamp="2025-12-08 19:41:03 +0000 UTC" firstStartedPulling="2025-12-08 19:41:04.823185813 +0000 UTC m=+721.593676087" lastFinishedPulling="2025-12-08 19:41:28.23655921 +0000 UTC m=+745.007049484" observedRunningTime="2025-12-08 19:41:29.076401358 +0000 UTC m=+745.846891652" watchObservedRunningTime="2025-12-08 19:41:29.099386361 +0000 UTC m=+745.869876635" Dec 08 19:41:29 crc kubenswrapper[5125]: I1208 19:41:29.103597 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-4ltgz" podStartSLOduration=2.7889600310000002 podStartE2EDuration="26.103576567s" podCreationTimestamp="2025-12-08 19:41:03 +0000 UTC" firstStartedPulling="2025-12-08 19:41:04.857675243 +0000 UTC m=+721.628165527" lastFinishedPulling="2025-12-08 19:41:28.172291769 
+0000 UTC m=+744.942782063" observedRunningTime="2025-12-08 19:41:29.098881148 +0000 UTC m=+745.869371432" watchObservedRunningTime="2025-12-08 19:41:29.103576567 +0000 UTC m=+745.874066841" Dec 08 19:41:29 crc kubenswrapper[5125]: I1208 19:41:29.131594 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-bpwnk" podStartSLOduration=7.131570908 podStartE2EDuration="7.131570908s" podCreationTimestamp="2025-12-08 19:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:41:29.112061781 +0000 UTC m=+745.882552085" watchObservedRunningTime="2025-12-08 19:41:29.131570908 +0000 UTC m=+745.902061182" Dec 08 19:41:30 crc kubenswrapper[5125]: I1208 19:41:30.069446 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d329c344-3049-4112-9b20-c096a7dd4ad3","Type":"ContainerStarted","Data":"cb41fd813c842459f31f554b39a0b334869c444917a69d8e51546755055001b4"} Dec 08 19:41:30 crc kubenswrapper[5125]: I1208 19:41:30.070825 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-bpwnk" event={"ID":"140783b6-c517-42f7-8cb0-dca40f7c6762","Type":"ContainerStarted","Data":"aac09eee6eafeed8fde0cb0214c5680991aa3a2816822fb46c3d5527f715fe92"} Dec 08 19:41:30 crc kubenswrapper[5125]: I1208 19:41:30.185810 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:41:30 crc kubenswrapper[5125]: I1208 19:41:30.223836 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:41:31 crc kubenswrapper[5125]: I1208 19:41:31.079284 5125 generic.go:358] "Generic (PLEG): container finished" podID="d329c344-3049-4112-9b20-c096a7dd4ad3" containerID="cb41fd813c842459f31f554b39a0b334869c444917a69d8e51546755055001b4" exitCode=0 
Dec 08 19:41:31 crc kubenswrapper[5125]: I1208 19:41:31.079438 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d329c344-3049-4112-9b20-c096a7dd4ad3","Type":"ContainerDied","Data":"cb41fd813c842459f31f554b39a0b334869c444917a69d8e51546755055001b4"} Dec 08 19:41:32 crc kubenswrapper[5125]: I1208 19:41:32.089797 5125 generic.go:358] "Generic (PLEG): container finished" podID="d329c344-3049-4112-9b20-c096a7dd4ad3" containerID="48d533ba06c6678b540a9a49719a0e5d96c865cb981f6fdf6e8704f7a6309681" exitCode=0 Dec 08 19:41:32 crc kubenswrapper[5125]: I1208 19:41:32.089890 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d329c344-3049-4112-9b20-c096a7dd4ad3","Type":"ContainerDied","Data":"48d533ba06c6678b540a9a49719a0e5d96c865cb981f6fdf6e8704f7a6309681"} Dec 08 19:41:33 crc kubenswrapper[5125]: I1208 19:41:33.097254 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"d329c344-3049-4112-9b20-c096a7dd4ad3","Type":"ContainerStarted","Data":"0ce64b9d111e2e39a662162bddb69f6eca53a78a62ded2ff61e4d2501659d9ac"} Dec 08 19:41:33 crc kubenswrapper[5125]: I1208 19:41:33.097558 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:41:33 crc kubenswrapper[5125]: I1208 19:41:33.126496 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.9350957730000005 podStartE2EDuration="37.126478395s" podCreationTimestamp="2025-12-08 19:40:56 +0000 UTC" firstStartedPulling="2025-12-08 19:40:58.203183264 +0000 UTC m=+714.973673538" lastFinishedPulling="2025-12-08 19:41:28.394565886 +0000 UTC m=+745.165056160" observedRunningTime="2025-12-08 19:41:33.1226801 +0000 UTC m=+749.893170394" watchObservedRunningTime="2025-12-08 
19:41:33.126478395 +0000 UTC m=+749.896968669" Dec 08 19:41:35 crc kubenswrapper[5125]: I1208 19:41:35.074723 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-4grq2" Dec 08 19:41:35 crc kubenswrapper[5125]: I1208 19:41:35.649467 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-6jrg8" Dec 08 19:41:35 crc kubenswrapper[5125]: I1208 19:41:35.649569 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-6jrg8" Dec 08 19:41:35 crc kubenswrapper[5125]: I1208 19:41:35.674039 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-6jrg8" Dec 08 19:41:36 crc kubenswrapper[5125]: I1208 19:41:36.136947 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-6jrg8" Dec 08 19:41:37 crc kubenswrapper[5125]: I1208 19:41:37.950592 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g"] Dec 08 19:41:37 crc kubenswrapper[5125]: I1208 19:41:37.982867 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g"] Dec 08 19:41:37 crc kubenswrapper[5125]: I1208 19:41:37.983403 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" Dec 08 19:41:37 crc kubenswrapper[5125]: I1208 19:41:37.992188 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-util\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" Dec 08 19:41:37 crc kubenswrapper[5125]: I1208 19:41:37.992475 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-bundle\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" Dec 08 19:41:37 crc kubenswrapper[5125]: I1208 19:41:37.992574 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcp5k\" (UniqueName: \"kubernetes.io/projected/687e140f-831c-4804-bb3f-d9e10d3a5036-kube-api-access-gcp5k\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.093664 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-util\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" Dec 
08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.093922 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-bundle\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.094024 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gcp5k\" (UniqueName: \"kubernetes.io/projected/687e140f-831c-4804-bb3f-d9e10d3a5036-kube-api-access-gcp5k\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.094294 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-util\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.094555 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-bundle\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.126712 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcp5k\" (UniqueName: 
\"kubernetes.io/projected/687e140f-831c-4804-bb3f-d9e10d3a5036-kube-api-access-gcp5k\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.302701 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.712935 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g"] Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.759261 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x"] Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.764589 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.766896 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.768036 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x"] Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.802762 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvlnb\" (UniqueName: \"kubernetes.io/projected/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-kube-api-access-lvlnb\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.803161 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.803232 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" Dec 08 19:41:38 crc 
kubenswrapper[5125]: I1208 19:41:38.904793 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lvlnb\" (UniqueName: \"kubernetes.io/projected/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-kube-api-access-lvlnb\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.904861 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.905024 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.905601 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.905754 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" Dec 08 19:41:38 crc kubenswrapper[5125]: I1208 19:41:38.931171 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvlnb\" (UniqueName: \"kubernetes.io/projected/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-kube-api-access-lvlnb\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" Dec 08 19:41:39 crc kubenswrapper[5125]: I1208 19:41:39.113911 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" Dec 08 19:41:39 crc kubenswrapper[5125]: I1208 19:41:39.138125 5125 generic.go:358] "Generic (PLEG): container finished" podID="687e140f-831c-4804-bb3f-d9e10d3a5036" containerID="1bd5d10346e38ca5cca3dcebe1ff3cea82b9c171920b5d921da9fdc5ed50c015" exitCode=0 Dec 08 19:41:39 crc kubenswrapper[5125]: I1208 19:41:39.138225 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" event={"ID":"687e140f-831c-4804-bb3f-d9e10d3a5036","Type":"ContainerDied","Data":"1bd5d10346e38ca5cca3dcebe1ff3cea82b9c171920b5d921da9fdc5ed50c015"} Dec 08 19:41:39 crc kubenswrapper[5125]: I1208 19:41:39.138267 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" event={"ID":"687e140f-831c-4804-bb3f-d9e10d3a5036","Type":"ContainerStarted","Data":"145aca1ea010e4e53b608f76fd3cf1bf335fabff2e1594acbb441f37de4d3f9b"} Dec 08 19:41:39 crc 
kubenswrapper[5125]: I1208 19:41:39.310354 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x"] Dec 08 19:41:39 crc kubenswrapper[5125]: I1208 19:41:39.752334 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch"] Dec 08 19:41:39 crc kubenswrapper[5125]: I1208 19:41:39.810068 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" Dec 08 19:41:39 crc kubenswrapper[5125]: I1208 19:41:39.818315 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch"] Dec 08 19:41:39 crc kubenswrapper[5125]: I1208 19:41:39.921579 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-util\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch\" (UID: \"97bb7598-b992-43ea-bd3c-71ca692ddebb\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" Dec 08 19:41:39 crc kubenswrapper[5125]: I1208 19:41:39.921646 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-bundle\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch\" (UID: \"97bb7598-b992-43ea-bd3c-71ca692ddebb\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" Dec 08 19:41:39 crc kubenswrapper[5125]: I1208 19:41:39.921766 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmsft\" (UniqueName: 
\"kubernetes.io/projected/97bb7598-b992-43ea-bd3c-71ca692ddebb-kube-api-access-kmsft\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch\" (UID: \"97bb7598-b992-43ea-bd3c-71ca692ddebb\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" Dec 08 19:41:40 crc kubenswrapper[5125]: I1208 19:41:40.023205 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-bundle\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch\" (UID: \"97bb7598-b992-43ea-bd3c-71ca692ddebb\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" Dec 08 19:41:40 crc kubenswrapper[5125]: I1208 19:41:40.023370 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kmsft\" (UniqueName: \"kubernetes.io/projected/97bb7598-b992-43ea-bd3c-71ca692ddebb-kube-api-access-kmsft\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch\" (UID: \"97bb7598-b992-43ea-bd3c-71ca692ddebb\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" Dec 08 19:41:40 crc kubenswrapper[5125]: I1208 19:41:40.023416 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-util\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch\" (UID: \"97bb7598-b992-43ea-bd3c-71ca692ddebb\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" Dec 08 19:41:40 crc kubenswrapper[5125]: I1208 19:41:40.024032 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-util\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch\" (UID: 
\"97bb7598-b992-43ea-bd3c-71ca692ddebb\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" Dec 08 19:41:40 crc kubenswrapper[5125]: I1208 19:41:40.024337 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-bundle\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch\" (UID: \"97bb7598-b992-43ea-bd3c-71ca692ddebb\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" Dec 08 19:41:40 crc kubenswrapper[5125]: I1208 19:41:40.056691 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmsft\" (UniqueName: \"kubernetes.io/projected/97bb7598-b992-43ea-bd3c-71ca692ddebb-kube-api-access-kmsft\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch\" (UID: \"97bb7598-b992-43ea-bd3c-71ca692ddebb\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" Dec 08 19:41:40 crc kubenswrapper[5125]: I1208 19:41:40.123217 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" Dec 08 19:41:40 crc kubenswrapper[5125]: I1208 19:41:40.160833 5125 generic.go:358] "Generic (PLEG): container finished" podID="7c28f964-8b5e-4a1c-b85d-0e305a398a1f" containerID="d905f7ed829a6ae7d05e725c21e5855ed8065e075277b75775ef0bddf6446ca4" exitCode=0 Dec 08 19:41:40 crc kubenswrapper[5125]: I1208 19:41:40.160894 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" event={"ID":"7c28f964-8b5e-4a1c-b85d-0e305a398a1f","Type":"ContainerDied","Data":"d905f7ed829a6ae7d05e725c21e5855ed8065e075277b75775ef0bddf6446ca4"} Dec 08 19:41:40 crc kubenswrapper[5125]: I1208 19:41:40.160927 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" event={"ID":"7c28f964-8b5e-4a1c-b85d-0e305a398a1f","Type":"ContainerStarted","Data":"2bd8fd5226c5b69989962e35780c83d98507402f89b88b7a421ba0e90ad790f6"} Dec 08 19:41:40 crc kubenswrapper[5125]: I1208 19:41:40.329488 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch"] Dec 08 19:41:40 crc kubenswrapper[5125]: W1208 19:41:40.341785 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97bb7598_b992_43ea_bd3c_71ca692ddebb.slice/crio-8830a34f9ec356bb71dc2d62f68bd244a12e6be1cd5c5a80db5837ed6d2d222e WatchSource:0}: Error finding container 8830a34f9ec356bb71dc2d62f68bd244a12e6be1cd5c5a80db5837ed6d2d222e: Status 404 returned error can't find the container with id 8830a34f9ec356bb71dc2d62f68bd244a12e6be1cd5c5a80db5837ed6d2d222e Dec 08 19:41:41 crc kubenswrapper[5125]: I1208 19:41:41.169982 5125 generic.go:358] "Generic (PLEG): container finished" 
podID="687e140f-831c-4804-bb3f-d9e10d3a5036" containerID="8c36121bb61db88558f6e6f33072cf67f2343f04aeaf9e75bdba9af56e1472df" exitCode=0 Dec 08 19:41:41 crc kubenswrapper[5125]: I1208 19:41:41.170054 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" event={"ID":"687e140f-831c-4804-bb3f-d9e10d3a5036","Type":"ContainerDied","Data":"8c36121bb61db88558f6e6f33072cf67f2343f04aeaf9e75bdba9af56e1472df"} Dec 08 19:41:41 crc kubenswrapper[5125]: I1208 19:41:41.173734 5125 generic.go:358] "Generic (PLEG): container finished" podID="97bb7598-b992-43ea-bd3c-71ca692ddebb" containerID="b72b6489e5f2bc537c5be5e9fcb649c991f56836646d2d6c988fcab638d59124" exitCode=0 Dec 08 19:41:41 crc kubenswrapper[5125]: I1208 19:41:41.173834 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" event={"ID":"97bb7598-b992-43ea-bd3c-71ca692ddebb","Type":"ContainerDied","Data":"b72b6489e5f2bc537c5be5e9fcb649c991f56836646d2d6c988fcab638d59124"} Dec 08 19:41:41 crc kubenswrapper[5125]: I1208 19:41:41.173880 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" event={"ID":"97bb7598-b992-43ea-bd3c-71ca692ddebb","Type":"ContainerStarted","Data":"8830a34f9ec356bb71dc2d62f68bd244a12e6be1cd5c5a80db5837ed6d2d222e"} Dec 08 19:41:41 crc kubenswrapper[5125]: I1208 19:41:41.176121 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" event={"ID":"7c28f964-8b5e-4a1c-b85d-0e305a398a1f","Type":"ContainerStarted","Data":"35d8a2c5af1b4e969d327ca9dfb3a18d5a75f427e598af6b358835fa9d1f8720"} Dec 08 19:41:42 crc kubenswrapper[5125]: I1208 19:41:42.182439 5125 generic.go:358] "Generic (PLEG): container finished" podID="7c28f964-8b5e-4a1c-b85d-0e305a398a1f" 
containerID="35d8a2c5af1b4e969d327ca9dfb3a18d5a75f427e598af6b358835fa9d1f8720" exitCode=0 Dec 08 19:41:42 crc kubenswrapper[5125]: I1208 19:41:42.182489 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" event={"ID":"7c28f964-8b5e-4a1c-b85d-0e305a398a1f","Type":"ContainerDied","Data":"35d8a2c5af1b4e969d327ca9dfb3a18d5a75f427e598af6b358835fa9d1f8720"} Dec 08 19:41:42 crc kubenswrapper[5125]: I1208 19:41:42.185751 5125 generic.go:358] "Generic (PLEG): container finished" podID="687e140f-831c-4804-bb3f-d9e10d3a5036" containerID="fc1fcb454d8017715d7f33e9b35a116eda27f2aa5032e0c032513161016e5de2" exitCode=0 Dec 08 19:41:42 crc kubenswrapper[5125]: I1208 19:41:42.185883 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" event={"ID":"687e140f-831c-4804-bb3f-d9e10d3a5036","Type":"ContainerDied","Data":"fc1fcb454d8017715d7f33e9b35a116eda27f2aa5032e0c032513161016e5de2"} Dec 08 19:41:42 crc kubenswrapper[5125]: I1208 19:41:42.188797 5125 generic.go:358] "Generic (PLEG): container finished" podID="97bb7598-b992-43ea-bd3c-71ca692ddebb" containerID="0be6611acb4d095d844bfec42f07696fd757d4da037817d71a030c56a0a04861" exitCode=0 Dec 08 19:41:42 crc kubenswrapper[5125]: I1208 19:41:42.188842 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" event={"ID":"97bb7598-b992-43ea-bd3c-71ca692ddebb","Type":"ContainerDied","Data":"0be6611acb4d095d844bfec42f07696fd757d4da037817d71a030c56a0a04861"} Dec 08 19:41:42 crc kubenswrapper[5125]: E1208 19:41:42.486051 5125 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c28f964_8b5e_4a1c_b85d_0e305a398a1f.slice/crio-9f9ec2b992ae4deaa7525175dec3958313571ef002edfdccf7e8257b3691d51f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c28f964_8b5e_4a1c_b85d_0e305a398a1f.slice/crio-conmon-9f9ec2b992ae4deaa7525175dec3958313571ef002edfdccf7e8257b3691d51f.scope\": RecentStats: unable to find data in memory cache]"
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.196824 5125 generic.go:358] "Generic (PLEG): container finished" podID="7c28f964-8b5e-4a1c-b85d-0e305a398a1f" containerID="9f9ec2b992ae4deaa7525175dec3958313571ef002edfdccf7e8257b3691d51f" exitCode=0
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.197324 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" event={"ID":"7c28f964-8b5e-4a1c-b85d-0e305a398a1f","Type":"ContainerDied","Data":"9f9ec2b992ae4deaa7525175dec3958313571ef002edfdccf7e8257b3691d51f"}
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.200503 5125 generic.go:358] "Generic (PLEG): container finished" podID="97bb7598-b992-43ea-bd3c-71ca692ddebb" containerID="476b7601786adad590591d7e11702fbc7bbb05e4f5dfb69efcac1412d7fb26b6" exitCode=0
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.200980 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" event={"ID":"97bb7598-b992-43ea-bd3c-71ca692ddebb","Type":"ContainerDied","Data":"476b7601786adad590591d7e11702fbc7bbb05e4f5dfb69efcac1412d7fb26b6"}
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.449158 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g"
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.572268 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcp5k\" (UniqueName: \"kubernetes.io/projected/687e140f-831c-4804-bb3f-d9e10d3a5036-kube-api-access-gcp5k\") pod \"687e140f-831c-4804-bb3f-d9e10d3a5036\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") "
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.572514 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-util\") pod \"687e140f-831c-4804-bb3f-d9e10d3a5036\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") "
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.572543 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-bundle\") pod \"687e140f-831c-4804-bb3f-d9e10d3a5036\" (UID: \"687e140f-831c-4804-bb3f-d9e10d3a5036\") "
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.573588 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-bundle" (OuterVolumeSpecName: "bundle") pod "687e140f-831c-4804-bb3f-d9e10d3a5036" (UID: "687e140f-831c-4804-bb3f-d9e10d3a5036"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.581764 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/687e140f-831c-4804-bb3f-d9e10d3a5036-kube-api-access-gcp5k" (OuterVolumeSpecName: "kube-api-access-gcp5k") pod "687e140f-831c-4804-bb3f-d9e10d3a5036" (UID: "687e140f-831c-4804-bb3f-d9e10d3a5036"). InnerVolumeSpecName "kube-api-access-gcp5k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.593212 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-util" (OuterVolumeSpecName: "util") pod "687e140f-831c-4804-bb3f-d9e10d3a5036" (UID: "687e140f-831c-4804-bb3f-d9e10d3a5036"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.674104 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gcp5k\" (UniqueName: \"kubernetes.io/projected/687e140f-831c-4804-bb3f-d9e10d3a5036-kube-api-access-gcp5k\") on node \"crc\" DevicePath \"\""
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.674164 5125 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-util\") on node \"crc\" DevicePath \"\""
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.674175 5125 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/687e140f-831c-4804-bb3f-d9e10d3a5036-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.711358 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t5c6h"]
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.712285 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="687e140f-831c-4804-bb3f-d9e10d3a5036" containerName="extract"
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.712307 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e140f-831c-4804-bb3f-d9e10d3a5036" containerName="extract"
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.712339 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="687e140f-831c-4804-bb3f-d9e10d3a5036" containerName="pull"
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.712347 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e140f-831c-4804-bb3f-d9e10d3a5036" containerName="pull"
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.712368 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="687e140f-831c-4804-bb3f-d9e10d3a5036" containerName="util"
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.712376 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e140f-831c-4804-bb3f-d9e10d3a5036" containerName="util"
Dec 08 19:41:43 crc kubenswrapper[5125]: I1208 19:41:43.712498 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="687e140f-831c-4804-bb3f-d9e10d3a5036" containerName="extract"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.131701 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5c6h"]
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.131935 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.179325 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2v84\" (UniqueName: \"kubernetes.io/projected/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-kube-api-access-f2v84\") pod \"redhat-operators-t5c6h\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.179449 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-catalog-content\") pod \"redhat-operators-t5c6h\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.179563 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-utilities\") pod \"redhat-operators-t5c6h\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.204631 5125 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="d329c344-3049-4112-9b20-c096a7dd4ad3" containerName="elasticsearch" probeResult="failure" output=<
Dec 08 19:41:44 crc kubenswrapper[5125]: {"timestamp": "2025-12-08T19:41:44+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 08 19:41:44 crc kubenswrapper[5125]: >
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.208568 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.209096 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb768249f8g" event={"ID":"687e140f-831c-4804-bb3f-d9e10d3a5036","Type":"ContainerDied","Data":"145aca1ea010e4e53b608f76fd3cf1bf335fabff2e1594acbb441f37de4d3f9b"}
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.209120 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="145aca1ea010e4e53b608f76fd3cf1bf335fabff2e1594acbb441f37de4d3f9b"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.283294 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-catalog-content\") pod \"redhat-operators-t5c6h\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.283414 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-utilities\") pod \"redhat-operators-t5c6h\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.283456 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f2v84\" (UniqueName: \"kubernetes.io/projected/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-kube-api-access-f2v84\") pod \"redhat-operators-t5c6h\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.284413 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-catalog-content\") pod \"redhat-operators-t5c6h\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.284792 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-utilities\") pod \"redhat-operators-t5c6h\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.307634 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2v84\" (UniqueName: \"kubernetes.io/projected/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-kube-api-access-f2v84\") pod \"redhat-operators-t5c6h\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.444067 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.451997 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.486321 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvlnb\" (UniqueName: \"kubernetes.io/projected/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-kube-api-access-lvlnb\") pod \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") "
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.486617 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-util\") pod \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") "
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.486655 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-bundle\") pod \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\" (UID: \"7c28f964-8b5e-4a1c-b85d-0e305a398a1f\") "
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.487527 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-bundle" (OuterVolumeSpecName: "bundle") pod "7c28f964-8b5e-4a1c-b85d-0e305a398a1f" (UID: "7c28f964-8b5e-4a1c-b85d-0e305a398a1f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.491387 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-kube-api-access-lvlnb" (OuterVolumeSpecName: "kube-api-access-lvlnb") pod "7c28f964-8b5e-4a1c-b85d-0e305a398a1f" (UID: "7c28f964-8b5e-4a1c-b85d-0e305a398a1f"). InnerVolumeSpecName "kube-api-access-lvlnb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.502150 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-util" (OuterVolumeSpecName: "util") pod "7c28f964-8b5e-4a1c-b85d-0e305a398a1f" (UID: "7c28f964-8b5e-4a1c-b85d-0e305a398a1f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.529970 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch"
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.587980 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-util\") pod \"97bb7598-b992-43ea-bd3c-71ca692ddebb\" (UID: \"97bb7598-b992-43ea-bd3c-71ca692ddebb\") "
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.588051 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmsft\" (UniqueName: \"kubernetes.io/projected/97bb7598-b992-43ea-bd3c-71ca692ddebb-kube-api-access-kmsft\") pod \"97bb7598-b992-43ea-bd3c-71ca692ddebb\" (UID: \"97bb7598-b992-43ea-bd3c-71ca692ddebb\") "
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.588068 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-bundle\") pod \"97bb7598-b992-43ea-bd3c-71ca692ddebb\" (UID: \"97bb7598-b992-43ea-bd3c-71ca692ddebb\") "
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.588276 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lvlnb\" (UniqueName: \"kubernetes.io/projected/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-kube-api-access-lvlnb\") on node \"crc\" DevicePath \"\""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.588289 5125 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-util\") on node \"crc\" DevicePath \"\""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.588298 5125 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7c28f964-8b5e-4a1c-b85d-0e305a398a1f-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.588584 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-bundle" (OuterVolumeSpecName: "bundle") pod "97bb7598-b992-43ea-bd3c-71ca692ddebb" (UID: "97bb7598-b992-43ea-bd3c-71ca692ddebb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.594207 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97bb7598-b992-43ea-bd3c-71ca692ddebb-kube-api-access-kmsft" (OuterVolumeSpecName: "kube-api-access-kmsft") pod "97bb7598-b992-43ea-bd3c-71ca692ddebb" (UID: "97bb7598-b992-43ea-bd3c-71ca692ddebb"). InnerVolumeSpecName "kube-api-access-kmsft". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.605348 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-util" (OuterVolumeSpecName: "util") pod "97bb7598-b992-43ea-bd3c-71ca692ddebb" (UID: "97bb7598-b992-43ea-bd3c-71ca692ddebb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.689993 5125 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-util\") on node \"crc\" DevicePath \"\""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.690034 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kmsft\" (UniqueName: \"kubernetes.io/projected/97bb7598-b992-43ea-bd3c-71ca692ddebb-kube-api-access-kmsft\") on node \"crc\" DevicePath \"\""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.690045 5125 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97bb7598-b992-43ea-bd3c-71ca692ddebb-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 19:41:44 crc kubenswrapper[5125]: I1208 19:41:44.711369 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5c6h"]
Dec 08 19:41:44 crc kubenswrapper[5125]: W1208 19:41:44.717436 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1940f9f_2825_4e5f_a7b5_1c3ec74450de.slice/crio-9db2229158f8cfaf4d1a47efe99778fd0583633e8ac894a45fbcc1adc6a50225 WatchSource:0}: Error finding container 9db2229158f8cfaf4d1a47efe99778fd0583633e8ac894a45fbcc1adc6a50225: Status 404 returned error can't find the container with id 9db2229158f8cfaf4d1a47efe99778fd0583633e8ac894a45fbcc1adc6a50225
Dec 08 19:41:45 crc kubenswrapper[5125]: I1208 19:41:45.216342 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5c6h" event={"ID":"b1940f9f-2825-4e5f-a7b5-1c3ec74450de","Type":"ContainerStarted","Data":"9db2229158f8cfaf4d1a47efe99778fd0583633e8ac894a45fbcc1adc6a50225"}
Dec 08 19:41:45 crc kubenswrapper[5125]: I1208 19:41:45.219400 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch" event={"ID":"97bb7598-b992-43ea-bd3c-71ca692ddebb","Type":"ContainerDied","Data":"8830a34f9ec356bb71dc2d62f68bd244a12e6be1cd5c5a80db5837ed6d2d222e"}
Dec 08 19:41:45 crc kubenswrapper[5125]: I1208 19:41:45.219428 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8830a34f9ec356bb71dc2d62f68bd244a12e6be1cd5c5a80db5837ed6d2d222e"
Dec 08 19:41:45 crc kubenswrapper[5125]: I1208 19:41:45.219450 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113jwmch"
Dec 08 19:41:45 crc kubenswrapper[5125]: I1208 19:41:45.222683 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x" event={"ID":"7c28f964-8b5e-4a1c-b85d-0e305a398a1f","Type":"ContainerDied","Data":"2bd8fd5226c5b69989962e35780c83d98507402f89b88b7a421ba0e90ad790f6"}
Dec 08 19:41:45 crc kubenswrapper[5125]: I1208 19:41:45.222704 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bd8fd5226c5b69989962e35780c83d98507402f89b88b7a421ba0e90ad790f6"
Dec 08 19:41:45 crc kubenswrapper[5125]: I1208 19:41:45.222781 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fsz45x"
Dec 08 19:41:46 crc kubenswrapper[5125]: I1208 19:41:46.230317 5125 generic.go:358] "Generic (PLEG): container finished" podID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerID="e58530b9bf5f8828dcc1dc4ab8e79fbd1f76461d0990acb8d36418297e9a293f" exitCode=0
Dec 08 19:41:46 crc kubenswrapper[5125]: I1208 19:41:46.230443 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5c6h" event={"ID":"b1940f9f-2825-4e5f-a7b5-1c3ec74450de","Type":"ContainerDied","Data":"e58530b9bf5f8828dcc1dc4ab8e79fbd1f76461d0990acb8d36418297e9a293f"}
Dec 08 19:41:47 crc kubenswrapper[5125]: I1208 19:41:47.242576 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5c6h" event={"ID":"b1940f9f-2825-4e5f-a7b5-1c3ec74450de","Type":"ContainerStarted","Data":"9a3e300929e19dac671dcbdd8dcbcb9b092cda296ed12cbc4db622b83d1f0c5c"}
Dec 08 19:41:48 crc kubenswrapper[5125]: I1208 19:41:48.251002 5125 generic.go:358] "Generic (PLEG): container finished" podID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerID="9a3e300929e19dac671dcbdd8dcbcb9b092cda296ed12cbc4db622b83d1f0c5c" exitCode=0
Dec 08 19:41:48 crc kubenswrapper[5125]: I1208 19:41:48.251045 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5c6h" event={"ID":"b1940f9f-2825-4e5f-a7b5-1c3ec74450de","Type":"ContainerDied","Data":"9a3e300929e19dac671dcbdd8dcbcb9b092cda296ed12cbc4db622b83d1f0c5c"}
Dec 08 19:41:49 crc kubenswrapper[5125]: I1208 19:41:49.335047 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.267204 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5c6h" event={"ID":"b1940f9f-2825-4e5f-a7b5-1c3ec74450de","Type":"ContainerStarted","Data":"9f1395926e3c5daa06fd8c8b2a1de8dbd5ea3b77be2b36bed9f099243665b37e"}
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.293328 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t5c6h" podStartSLOduration=6.649677517 podStartE2EDuration="7.293119732s" podCreationTimestamp="2025-12-08 19:41:43 +0000 UTC" firstStartedPulling="2025-12-08 19:41:46.231894937 +0000 UTC m=+763.002385211" lastFinishedPulling="2025-12-08 19:41:46.875337162 +0000 UTC m=+763.645827426" observedRunningTime="2025-12-08 19:41:50.292017112 +0000 UTC m=+767.062507396" watchObservedRunningTime="2025-12-08 19:41:50.293119732 +0000 UTC m=+767.063610006"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.818617 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l"]
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.819881 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c28f964-8b5e-4a1c-b85d-0e305a398a1f" containerName="extract"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.819960 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c28f964-8b5e-4a1c-b85d-0e305a398a1f" containerName="extract"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820084 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="97bb7598-b992-43ea-bd3c-71ca692ddebb" containerName="util"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820147 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bb7598-b992-43ea-bd3c-71ca692ddebb" containerName="util"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820219 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c28f964-8b5e-4a1c-b85d-0e305a398a1f" containerName="pull"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820271 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c28f964-8b5e-4a1c-b85d-0e305a398a1f" containerName="pull"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820337 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="97bb7598-b992-43ea-bd3c-71ca692ddebb" containerName="extract"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820390 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bb7598-b992-43ea-bd3c-71ca692ddebb" containerName="extract"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820446 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7c28f964-8b5e-4a1c-b85d-0e305a398a1f" containerName="util"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820495 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c28f964-8b5e-4a1c-b85d-0e305a398a1f" containerName="util"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820546 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="97bb7598-b992-43ea-bd3c-71ca692ddebb" containerName="pull"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820591 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="97bb7598-b992-43ea-bd3c-71ca692ddebb" containerName="pull"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820762 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="97bb7598-b992-43ea-bd3c-71ca692ddebb" containerName="extract"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.820843 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="7c28f964-8b5e-4a1c-b85d-0e305a398a1f" containerName="extract"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.823628 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.826768 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-4d7th\""
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.836987 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l"]
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.871288 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9qj\" (UniqueName: \"kubernetes.io/projected/7314013a-416a-4b94-93d8-b2ba4fdbf35e-kube-api-access-sm9qj\") pod \"smart-gateway-operator-5cd794ff55-mzk5l\" (UID: \"7314013a-416a-4b94-93d8-b2ba4fdbf35e\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.871564 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7314013a-416a-4b94-93d8-b2ba4fdbf35e-runner\") pod \"smart-gateway-operator-5cd794ff55-mzk5l\" (UID: \"7314013a-416a-4b94-93d8-b2ba4fdbf35e\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.973236 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9qj\" (UniqueName: \"kubernetes.io/projected/7314013a-416a-4b94-93d8-b2ba4fdbf35e-kube-api-access-sm9qj\") pod \"smart-gateway-operator-5cd794ff55-mzk5l\" (UID: \"7314013a-416a-4b94-93d8-b2ba4fdbf35e\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.973512 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7314013a-416a-4b94-93d8-b2ba4fdbf35e-runner\") pod \"smart-gateway-operator-5cd794ff55-mzk5l\" (UID: \"7314013a-416a-4b94-93d8-b2ba4fdbf35e\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l"
Dec 08 19:41:50 crc kubenswrapper[5125]: I1208 19:41:50.974004 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7314013a-416a-4b94-93d8-b2ba4fdbf35e-runner\") pod \"smart-gateway-operator-5cd794ff55-mzk5l\" (UID: \"7314013a-416a-4b94-93d8-b2ba4fdbf35e\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l"
Dec 08 19:41:51 crc kubenswrapper[5125]: I1208 19:41:51.004308 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9qj\" (UniqueName: \"kubernetes.io/projected/7314013a-416a-4b94-93d8-b2ba4fdbf35e-kube-api-access-sm9qj\") pod \"smart-gateway-operator-5cd794ff55-mzk5l\" (UID: \"7314013a-416a-4b94-93d8-b2ba4fdbf35e\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l"
Dec 08 19:41:51 crc kubenswrapper[5125]: I1208 19:41:51.101918 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:41:51 crc kubenswrapper[5125]: I1208 19:41:51.101985 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:41:51 crc kubenswrapper[5125]: I1208 19:41:51.138136 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l"
Dec 08 19:41:51 crc kubenswrapper[5125]: I1208 19:41:51.377652 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l"]
Dec 08 19:41:51 crc kubenswrapper[5125]: W1208 19:41:51.383158 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7314013a_416a_4b94_93d8_b2ba4fdbf35e.slice/crio-a64989cd695de28f354815562fc02589e1a5563238a7ae535d35ba7d9ad9e3a2 WatchSource:0}: Error finding container a64989cd695de28f354815562fc02589e1a5563238a7ae535d35ba7d9ad9e3a2: Status 404 returned error can't find the container with id a64989cd695de28f354815562fc02589e1a5563238a7ae535d35ba7d9ad9e3a2
Dec 08 19:41:52 crc kubenswrapper[5125]: I1208 19:41:52.122876 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-stzmk"]
Dec 08 19:41:52 crc kubenswrapper[5125]: I1208 19:41:52.138922 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-stzmk"
Dec 08 19:41:52 crc kubenswrapper[5125]: I1208 19:41:52.145861 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-6ksdg\""
Dec 08 19:41:52 crc kubenswrapper[5125]: I1208 19:41:52.156276 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-stzmk"]
Dec 08 19:41:52 crc kubenswrapper[5125]: I1208 19:41:52.189820 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf46w\" (UniqueName: \"kubernetes.io/projected/6c07e085-3d0a-4717-9ae8-43e0c1c00a3c-kube-api-access-gf46w\") pod \"interconnect-operator-78b9bd8798-stzmk\" (UID: \"6c07e085-3d0a-4717-9ae8-43e0c1c00a3c\") " pod="service-telemetry/interconnect-operator-78b9bd8798-stzmk"
Dec 08 19:41:52 crc kubenswrapper[5125]: I1208 19:41:52.290916 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gf46w\" (UniqueName: \"kubernetes.io/projected/6c07e085-3d0a-4717-9ae8-43e0c1c00a3c-kube-api-access-gf46w\") pod \"interconnect-operator-78b9bd8798-stzmk\" (UID: \"6c07e085-3d0a-4717-9ae8-43e0c1c00a3c\") " pod="service-telemetry/interconnect-operator-78b9bd8798-stzmk"
Dec 08 19:41:52 crc kubenswrapper[5125]: I1208 19:41:52.300040 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l" event={"ID":"7314013a-416a-4b94-93d8-b2ba4fdbf35e","Type":"ContainerStarted","Data":"a64989cd695de28f354815562fc02589e1a5563238a7ae535d35ba7d9ad9e3a2"}
Dec 08 19:41:52 crc kubenswrapper[5125]: I1208 19:41:52.317266 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf46w\" (UniqueName: \"kubernetes.io/projected/6c07e085-3d0a-4717-9ae8-43e0c1c00a3c-kube-api-access-gf46w\") pod \"interconnect-operator-78b9bd8798-stzmk\" (UID: \"6c07e085-3d0a-4717-9ae8-43e0c1c00a3c\") " pod="service-telemetry/interconnect-operator-78b9bd8798-stzmk"
Dec 08 19:41:52 crc kubenswrapper[5125]: I1208 19:41:52.500782 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-stzmk"
Dec 08 19:41:53 crc kubenswrapper[5125]: I1208 19:41:53.003889 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-stzmk"]
Dec 08 19:41:53 crc kubenswrapper[5125]: W1208 19:41:53.020453 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c07e085_3d0a_4717_9ae8_43e0c1c00a3c.slice/crio-e3961b5a6a9928d602bc109344c9b9337fc1fdf86f8918f8f380cd3a028eea9a WatchSource:0}: Error finding container e3961b5a6a9928d602bc109344c9b9337fc1fdf86f8918f8f380cd3a028eea9a: Status 404 returned error can't find the container with id e3961b5a6a9928d602bc109344c9b9337fc1fdf86f8918f8f380cd3a028eea9a
Dec 08 19:41:53 crc kubenswrapper[5125]: I1208 19:41:53.313940 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-stzmk" event={"ID":"6c07e085-3d0a-4717-9ae8-43e0c1c00a3c","Type":"ContainerStarted","Data":"e3961b5a6a9928d602bc109344c9b9337fc1fdf86f8918f8f380cd3a028eea9a"}
Dec 08 19:41:53 crc kubenswrapper[5125]: I1208 19:41:53.419780 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-79647f8775-rc8dw"]
Dec 08 19:41:53 crc kubenswrapper[5125]: I1208 19:41:53.806219 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-79647f8775-rc8dw"
Dec 08 19:41:53 crc kubenswrapper[5125]: I1208 19:41:53.808670 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-wf9jq\""
Dec 08 19:41:53 crc kubenswrapper[5125]: I1208 19:41:53.816730 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-79647f8775-rc8dw"]
Dec 08 19:41:53 crc kubenswrapper[5125]: I1208 19:41:53.917259 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckkb9\" (UniqueName: \"kubernetes.io/projected/23a46931-52eb-4670-b28d-08719cfc8fa1-kube-api-access-ckkb9\") pod \"service-telemetry-operator-79647f8775-rc8dw\" (UID: \"23a46931-52eb-4670-b28d-08719cfc8fa1\") " pod="service-telemetry/service-telemetry-operator-79647f8775-rc8dw"
Dec 08 19:41:53 crc kubenswrapper[5125]: I1208 19:41:53.917310 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/23a46931-52eb-4670-b28d-08719cfc8fa1-runner\") pod \"service-telemetry-operator-79647f8775-rc8dw\" (UID: \"23a46931-52eb-4670-b28d-08719cfc8fa1\") " pod="service-telemetry/service-telemetry-operator-79647f8775-rc8dw"
Dec 08 19:41:54 crc kubenswrapper[5125]: I1208 19:41:54.018407 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ckkb9\" (UniqueName: \"kubernetes.io/projected/23a46931-52eb-4670-b28d-08719cfc8fa1-kube-api-access-ckkb9\") pod \"service-telemetry-operator-79647f8775-rc8dw\" (UID: \"23a46931-52eb-4670-b28d-08719cfc8fa1\") " pod="service-telemetry/service-telemetry-operator-79647f8775-rc8dw"
Dec 08 19:41:54 crc kubenswrapper[5125]: I1208 19:41:54.021022 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/23a46931-52eb-4670-b28d-08719cfc8fa1-runner\") pod \"service-telemetry-operator-79647f8775-rc8dw\" (UID: \"23a46931-52eb-4670-b28d-08719cfc8fa1\") " pod="service-telemetry/service-telemetry-operator-79647f8775-rc8dw"
Dec 08 19:41:54 crc kubenswrapper[5125]: I1208 19:41:54.052065 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckkb9\" (UniqueName: \"kubernetes.io/projected/23a46931-52eb-4670-b28d-08719cfc8fa1-kube-api-access-ckkb9\") pod \"service-telemetry-operator-79647f8775-rc8dw\" (UID: \"23a46931-52eb-4670-b28d-08719cfc8fa1\") " pod="service-telemetry/service-telemetry-operator-79647f8775-rc8dw"
Dec 08 19:41:54 crc kubenswrapper[5125]: I1208 19:41:54.076768 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/23a46931-52eb-4670-b28d-08719cfc8fa1-runner\") pod \"service-telemetry-operator-79647f8775-rc8dw\" (UID: \"23a46931-52eb-4670-b28d-08719cfc8fa1\") " pod="service-telemetry/service-telemetry-operator-79647f8775-rc8dw"
Dec 08 19:41:54 crc kubenswrapper[5125]: I1208 19:41:54.128571 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-79647f8775-rc8dw"
Dec 08 19:41:54 crc kubenswrapper[5125]: I1208 19:41:54.414468 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-79647f8775-rc8dw"]
Dec 08 19:41:54 crc kubenswrapper[5125]: W1208 19:41:54.429188 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23a46931_52eb_4670_b28d_08719cfc8fa1.slice/crio-2cb979b543a5f250a6f3a248132e26db3769e620609963c1298e6f8ab55cb545 WatchSource:0}: Error finding container 2cb979b543a5f250a6f3a248132e26db3769e620609963c1298e6f8ab55cb545: Status 404 returned error can't find the container with id 2cb979b543a5f250a6f3a248132e26db3769e620609963c1298e6f8ab55cb545
Dec 08 19:41:54 crc kubenswrapper[5125]: I1208 19:41:54.444381 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:54 crc kubenswrapper[5125]: I1208 19:41:54.444721 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t5c6h"
Dec 08 19:41:55 crc kubenswrapper[5125]: I1208 19:41:55.349896 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-79647f8775-rc8dw" event={"ID":"23a46931-52eb-4670-b28d-08719cfc8fa1","Type":"ContainerStarted","Data":"2cb979b543a5f250a6f3a248132e26db3769e620609963c1298e6f8ab55cb545"}
Dec 08 19:41:55 crc kubenswrapper[5125]: I1208 19:41:55.499105 5125 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t5c6h" podUID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerName="registry-server" probeResult="failure" output=<
Dec 08 19:41:55 crc kubenswrapper[5125]: timeout: failed to connect service ":50051" within 1s
Dec 08 19:41:55 crc kubenswrapper[5125]: >
Dec 08 19:42:04 crc kubenswrapper[5125]: 
I1208 19:42:04.489064 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t5c6h" Dec 08 19:42:04 crc kubenswrapper[5125]: I1208 19:42:04.601231 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t5c6h" Dec 08 19:42:07 crc kubenswrapper[5125]: I1208 19:42:07.904031 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5c6h"] Dec 08 19:42:07 crc kubenswrapper[5125]: I1208 19:42:07.906158 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t5c6h" podUID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerName="registry-server" containerID="cri-o://9f1395926e3c5daa06fd8c8b2a1de8dbd5ea3b77be2b36bed9f099243665b37e" gracePeriod=2 Dec 08 19:42:08 crc kubenswrapper[5125]: I1208 19:42:08.446599 5125 generic.go:358] "Generic (PLEG): container finished" podID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerID="9f1395926e3c5daa06fd8c8b2a1de8dbd5ea3b77be2b36bed9f099243665b37e" exitCode=0 Dec 08 19:42:08 crc kubenswrapper[5125]: I1208 19:42:08.446992 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5c6h" event={"ID":"b1940f9f-2825-4e5f-a7b5-1c3ec74450de","Type":"ContainerDied","Data":"9f1395926e3c5daa06fd8c8b2a1de8dbd5ea3b77be2b36bed9f099243665b37e"} Dec 08 19:42:11 crc kubenswrapper[5125]: I1208 19:42:11.467252 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5c6h" event={"ID":"b1940f9f-2825-4e5f-a7b5-1c3ec74450de","Type":"ContainerDied","Data":"9db2229158f8cfaf4d1a47efe99778fd0583633e8ac894a45fbcc1adc6a50225"} Dec 08 19:42:11 crc kubenswrapper[5125]: I1208 19:42:11.467600 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db2229158f8cfaf4d1a47efe99778fd0583633e8ac894a45fbcc1adc6a50225" Dec 08 19:42:11 
crc kubenswrapper[5125]: I1208 19:42:11.487891 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5c6h" Dec 08 19:42:11 crc kubenswrapper[5125]: I1208 19:42:11.587626 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-catalog-content\") pod \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " Dec 08 19:42:11 crc kubenswrapper[5125]: I1208 19:42:11.587706 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-utilities\") pod \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " Dec 08 19:42:11 crc kubenswrapper[5125]: I1208 19:42:11.587759 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2v84\" (UniqueName: \"kubernetes.io/projected/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-kube-api-access-f2v84\") pod \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\" (UID: \"b1940f9f-2825-4e5f-a7b5-1c3ec74450de\") " Dec 08 19:42:11 crc kubenswrapper[5125]: I1208 19:42:11.589677 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-utilities" (OuterVolumeSpecName: "utilities") pod "b1940f9f-2825-4e5f-a7b5-1c3ec74450de" (UID: "b1940f9f-2825-4e5f-a7b5-1c3ec74450de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:11 crc kubenswrapper[5125]: I1208 19:42:11.595327 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-kube-api-access-f2v84" (OuterVolumeSpecName: "kube-api-access-f2v84") pod "b1940f9f-2825-4e5f-a7b5-1c3ec74450de" (UID: "b1940f9f-2825-4e5f-a7b5-1c3ec74450de"). InnerVolumeSpecName "kube-api-access-f2v84". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:42:11 crc kubenswrapper[5125]: I1208 19:42:11.689546 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:11 crc kubenswrapper[5125]: I1208 19:42:11.689591 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f2v84\" (UniqueName: \"kubernetes.io/projected/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-kube-api-access-f2v84\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:11 crc kubenswrapper[5125]: I1208 19:42:11.690518 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1940f9f-2825-4e5f-a7b5-1c3ec74450de" (UID: "b1940f9f-2825-4e5f-a7b5-1c3ec74450de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:42:11 crc kubenswrapper[5125]: I1208 19:42:11.790429 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1940f9f-2825-4e5f-a7b5-1c3ec74450de-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:42:12 crc kubenswrapper[5125]: I1208 19:42:12.474781 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5c6h" Dec 08 19:42:12 crc kubenswrapper[5125]: I1208 19:42:12.496537 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5c6h"] Dec 08 19:42:12 crc kubenswrapper[5125]: I1208 19:42:12.502180 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t5c6h"] Dec 08 19:42:13 crc kubenswrapper[5125]: I1208 19:42:13.779740 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" path="/var/lib/kubelet/pods/b1940f9f-2825-4e5f-a7b5-1c3ec74450de/volumes" Dec 08 19:42:19 crc kubenswrapper[5125]: I1208 19:42:19.550310 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-79647f8775-rc8dw" event={"ID":"23a46931-52eb-4670-b28d-08719cfc8fa1","Type":"ContainerStarted","Data":"0e7ae07f610e57951f7f2dea8e97d5f78077ce2381e575e570ca71e1d4551201"} Dec 08 19:42:19 crc kubenswrapper[5125]: I1208 19:42:19.551893 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l" event={"ID":"7314013a-416a-4b94-93d8-b2ba4fdbf35e","Type":"ContainerStarted","Data":"52b4cde5eb92b883116f0a2e04ab0b81a435765ed1639267005f91405e2896a0"} Dec 08 19:42:19 crc kubenswrapper[5125]: I1208 19:42:19.553351 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-stzmk" event={"ID":"6c07e085-3d0a-4717-9ae8-43e0c1c00a3c","Type":"ContainerStarted","Data":"b5ddd341b2da2b76a0540b9aecdca207cc34ed1680ff757142d35dda8d911a7c"} Dec 08 19:42:19 crc kubenswrapper[5125]: I1208 19:42:19.574028 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-79647f8775-rc8dw" podStartSLOduration=1.7231838339999999 podStartE2EDuration="26.5740116s" podCreationTimestamp="2025-12-08 19:41:53 +0000 
UTC" firstStartedPulling="2025-12-08 19:41:54.43825037 +0000 UTC m=+771.208740654" lastFinishedPulling="2025-12-08 19:42:19.289078146 +0000 UTC m=+796.059568420" observedRunningTime="2025-12-08 19:42:19.568783875 +0000 UTC m=+796.339274159" watchObservedRunningTime="2025-12-08 19:42:19.5740116 +0000 UTC m=+796.344501874" Dec 08 19:42:19 crc kubenswrapper[5125]: I1208 19:42:19.584784 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-stzmk" podStartSLOduration=9.415824555 podStartE2EDuration="27.584761405s" podCreationTimestamp="2025-12-08 19:41:52 +0000 UTC" firstStartedPulling="2025-12-08 19:41:53.022090858 +0000 UTC m=+769.792581132" lastFinishedPulling="2025-12-08 19:42:11.191027708 +0000 UTC m=+787.961517982" observedRunningTime="2025-12-08 19:42:19.584729135 +0000 UTC m=+796.355219419" watchObservedRunningTime="2025-12-08 19:42:19.584761405 +0000 UTC m=+796.355251689" Dec 08 19:42:19 crc kubenswrapper[5125]: I1208 19:42:19.606868 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-5cd794ff55-mzk5l" podStartSLOduration=1.699124874 podStartE2EDuration="29.606848284s" podCreationTimestamp="2025-12-08 19:41:50 +0000 UTC" firstStartedPulling="2025-12-08 19:41:51.384910155 +0000 UTC m=+768.155400429" lastFinishedPulling="2025-12-08 19:42:19.292633545 +0000 UTC m=+796.063123839" observedRunningTime="2025-12-08 19:42:19.60598444 +0000 UTC m=+796.376474724" watchObservedRunningTime="2025-12-08 19:42:19.606848284 +0000 UTC m=+796.377338558" Dec 08 19:42:21 crc kubenswrapper[5125]: I1208 19:42:21.101503 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:42:21 crc kubenswrapper[5125]: 
I1208 19:42:21.101961 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:42:45 crc kubenswrapper[5125]: I1208 19:42:45.672570 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-2jht7"] Dec 08 19:42:45 crc kubenswrapper[5125]: I1208 19:42:45.675882 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerName="extract-content" Dec 08 19:42:45 crc kubenswrapper[5125]: I1208 19:42:45.675901 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerName="extract-content" Dec 08 19:42:45 crc kubenswrapper[5125]: I1208 19:42:45.675921 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerName="extract-utilities" Dec 08 19:42:45 crc kubenswrapper[5125]: I1208 19:42:45.675928 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerName="extract-utilities" Dec 08 19:42:45 crc kubenswrapper[5125]: I1208 19:42:45.675956 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerName="registry-server" Dec 08 19:42:45 crc kubenswrapper[5125]: I1208 19:42:45.675962 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerName="registry-server" Dec 08 19:42:45 crc kubenswrapper[5125]: I1208 19:42:45.676061 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="b1940f9f-2825-4e5f-a7b5-1c3ec74450de" containerName="registry-server" Dec 08 19:42:46 crc 
kubenswrapper[5125]: I1208 19:42:46.507433 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-2jht7"] Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.508109 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.511818 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-wntl5\"" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.512309 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.512697 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.512848 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.512873 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.513546 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.514442 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.593350 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" 
(UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.593402 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-users\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.593515 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.597440 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45c4\" (UniqueName: \"kubernetes.io/projected/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-kube-api-access-h45c4\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.597699 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: 
\"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.597904 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.598105 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-config\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.699237 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.699284 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-config\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.699332 5125 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.700484 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-config\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.700526 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-users\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.700566 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.700584 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h45c4\" (UniqueName: \"kubernetes.io/projected/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-kube-api-access-h45c4\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.700641 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.706771 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.722485 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-users\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.723041 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.723050 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.731392 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h45c4\" (UniqueName: \"kubernetes.io/projected/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-kube-api-access-h45c4\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.731558 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-2jht7\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:46 crc kubenswrapper[5125]: I1208 19:42:46.847931 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:42:47 crc kubenswrapper[5125]: I1208 19:42:47.042519 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-2jht7"] Dec 08 19:42:47 crc kubenswrapper[5125]: W1208 19:42:47.044790 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda86a2ea5_e88b_4b25_a5ad_95e37bae9428.slice/crio-0c626061692dd528d40f28f1f73e56ecac229a654c770b31592f532820a79120 WatchSource:0}: Error finding container 0c626061692dd528d40f28f1f73e56ecac229a654c770b31592f532820a79120: Status 404 returned error can't find the container with id 0c626061692dd528d40f28f1f73e56ecac229a654c770b31592f532820a79120 Dec 08 19:42:47 crc kubenswrapper[5125]: I1208 19:42:47.831053 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" event={"ID":"a86a2ea5-e88b-4b25-a5ad-95e37bae9428","Type":"ContainerStarted","Data":"0c626061692dd528d40f28f1f73e56ecac229a654c770b31592f532820a79120"} Dec 08 19:42:51 crc kubenswrapper[5125]: I1208 19:42:51.100977 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:42:51 crc kubenswrapper[5125]: I1208 19:42:51.101383 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:42:51 crc kubenswrapper[5125]: I1208 19:42:51.101441 5125 kubelet.go:2658] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:42:51 crc kubenswrapper[5125]: I1208 19:42:51.102206 5125 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9eb1c7e5f36182d845fb8ea13653363a63738eedc2b7b6ae1600d40f21292c7"} pod="openshift-machine-config-operator/machine-config-daemon-slhjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:42:51 crc kubenswrapper[5125]: I1208 19:42:51.102271 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" containerID="cri-o://f9eb1c7e5f36182d845fb8ea13653363a63738eedc2b7b6ae1600d40f21292c7" gracePeriod=600 Dec 08 19:42:51 crc kubenswrapper[5125]: I1208 19:42:51.867390 5125 generic.go:358] "Generic (PLEG): container finished" podID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerID="f9eb1c7e5f36182d845fb8ea13653363a63738eedc2b7b6ae1600d40f21292c7" exitCode=0 Dec 08 19:42:51 crc kubenswrapper[5125]: I1208 19:42:51.867472 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerDied","Data":"f9eb1c7e5f36182d845fb8ea13653363a63738eedc2b7b6ae1600d40f21292c7"} Dec 08 19:42:51 crc kubenswrapper[5125]: I1208 19:42:51.868043 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerStarted","Data":"182a5753b7665f64b7e1bda17a1b8b8ee7e43a6725053ecf79f5513fca73d87e"} Dec 08 19:42:51 crc kubenswrapper[5125]: I1208 19:42:51.868069 5125 scope.go:117] "RemoveContainer" 
containerID="3eaff9ff574646a35fa068c19d68106caffff9d6e28141d09b7049a7e34edb72"
Dec 08 19:42:54 crc kubenswrapper[5125]: I1208 19:42:54.888969 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" event={"ID":"a86a2ea5-e88b-4b25-a5ad-95e37bae9428","Type":"ContainerStarted","Data":"97a8e569439335a9b5882d0098e87e5b4b9cc8bd4da7311912b761c027fa5bd3"}
Dec 08 19:42:54 crc kubenswrapper[5125]: I1208 19:42:54.922145 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" podStartSLOduration=2.803107312 podStartE2EDuration="9.922127016s" podCreationTimestamp="2025-12-08 19:42:45 +0000 UTC" firstStartedPulling="2025-12-08 19:42:47.046924809 +0000 UTC m=+823.817415083" lastFinishedPulling="2025-12-08 19:42:54.165944513 +0000 UTC m=+830.936434787" observedRunningTime="2025-12-08 19:42:54.91938258 +0000 UTC m=+831.689872884" watchObservedRunningTime="2025-12-08 19:42:54.922127016 +0000 UTC m=+831.692617290"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.321178 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"]
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.331310 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.333494 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\""
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.333553 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\""
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.333620 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-dlhdk\""
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.333740 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\""
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.335324 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\""
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.335447 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\""
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.335325 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\""
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.335752 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\""
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.345523 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.436620 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c675e01-cff9-4e81-9b8d-8522d962bb89-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.436712 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c675e01-cff9-4e81-9b8d-8522d962bb89-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.436746 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-web-config\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.436773 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-config\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.436811 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c675e01-cff9-4e81-9b8d-8522d962bb89-tls-assets\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.436834 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.436865 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-352196a1-8c40-423d-9b4f-301f826e1c24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352196a1-8c40-423d-9b4f-301f826e1c24\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.436888 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.436916 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c675e01-cff9-4e81-9b8d-8522d962bb89-config-out\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.436941 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gcml\" (UniqueName: \"kubernetes.io/projected/9c675e01-cff9-4e81-9b8d-8522d962bb89-kube-api-access-2gcml\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.538581 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-352196a1-8c40-423d-9b4f-301f826e1c24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352196a1-8c40-423d-9b4f-301f826e1c24\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.538639 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.538674 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c675e01-cff9-4e81-9b8d-8522d962bb89-config-out\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.538703 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2gcml\" (UniqueName: \"kubernetes.io/projected/9c675e01-cff9-4e81-9b8d-8522d962bb89-kube-api-access-2gcml\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.538764 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c675e01-cff9-4e81-9b8d-8522d962bb89-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.538790 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c675e01-cff9-4e81-9b8d-8522d962bb89-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.538814 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-web-config\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.538833 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-config\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.538885 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c675e01-cff9-4e81-9b8d-8522d962bb89-tls-assets\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.538917 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: E1208 19:42:56.540123 5125 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Dec 08 19:42:56 crc kubenswrapper[5125]: E1208 19:42:56.540217 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-prometheus-proxy-tls podName:9c675e01-cff9-4e81-9b8d-8522d962bb89 nodeName:}" failed. No retries permitted until 2025-12-08 19:42:57.040193312 +0000 UTC m=+833.810683586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "9c675e01-cff9-4e81-9b8d-8522d962bb89") : secret "default-prometheus-proxy-tls" not found
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.540584 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9c675e01-cff9-4e81-9b8d-8522d962bb89-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.541072 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c675e01-cff9-4e81-9b8d-8522d962bb89-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.544102 5125 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.544266 5125 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-352196a1-8c40-423d-9b4f-301f826e1c24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352196a1-8c40-423d-9b4f-301f826e1c24\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8585570f82cd28f094d5d79969b7faddd51eb92dc16835952fc70e258fbf565f/globalmount\"" pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.545383 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.545413 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-config\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.545862 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9c675e01-cff9-4e81-9b8d-8522d962bb89-tls-assets\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.546991 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9c675e01-cff9-4e81-9b8d-8522d962bb89-config-out\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.548528 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-web-config\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.566324 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-352196a1-8c40-423d-9b4f-301f826e1c24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-352196a1-8c40-423d-9b4f-301f826e1c24\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:56 crc kubenswrapper[5125]: I1208 19:42:56.573730 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gcml\" (UniqueName: \"kubernetes.io/projected/9c675e01-cff9-4e81-9b8d-8522d962bb89-kube-api-access-2gcml\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:57 crc kubenswrapper[5125]: I1208 19:42:57.048398 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:57 crc kubenswrapper[5125]: E1208 19:42:57.048595 5125 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Dec 08 19:42:57 crc kubenswrapper[5125]: E1208 19:42:57.048845 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-prometheus-proxy-tls podName:9c675e01-cff9-4e81-9b8d-8522d962bb89 nodeName:}" failed. No retries permitted until 2025-12-08 19:42:58.048826991 +0000 UTC m=+834.819317265 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "9c675e01-cff9-4e81-9b8d-8522d962bb89") : secret "default-prometheus-proxy-tls" not found
Dec 08 19:42:58 crc kubenswrapper[5125]: I1208 19:42:58.062552 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:58 crc kubenswrapper[5125]: I1208 19:42:58.068257 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9c675e01-cff9-4e81-9b8d-8522d962bb89-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"9c675e01-cff9-4e81-9b8d-8522d962bb89\") " pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:58 crc kubenswrapper[5125]: I1208 19:42:58.147564 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0"
Dec 08 19:42:58 crc kubenswrapper[5125]: W1208 19:42:58.547780 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c675e01_cff9_4e81_9b8d_8522d962bb89.slice/crio-daebee34d30e531a08ee9148c54e7ca1872e57c7857aa867e510124378b333fd WatchSource:0}: Error finding container daebee34d30e531a08ee9148c54e7ca1872e57c7857aa867e510124378b333fd: Status 404 returned error can't find the container with id daebee34d30e531a08ee9148c54e7ca1872e57c7857aa867e510124378b333fd
Dec 08 19:42:58 crc kubenswrapper[5125]: I1208 19:42:58.551033 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Dec 08 19:42:58 crc kubenswrapper[5125]: I1208 19:42:58.936321 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"9c675e01-cff9-4e81-9b8d-8522d962bb89","Type":"ContainerStarted","Data":"daebee34d30e531a08ee9148c54e7ca1872e57c7857aa867e510124378b333fd"}
Dec 08 19:43:02 crc kubenswrapper[5125]: I1208 19:43:02.965947 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"9c675e01-cff9-4e81-9b8d-8522d962bb89","Type":"ContainerStarted","Data":"f790be7a8a5f316da0297013994be507ee4039ad901a8bafaac8a70a4e3bc3ad"}
Dec 08 19:43:05 crc kubenswrapper[5125]: I1208 19:43:05.567317 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-k59mb"]
Dec 08 19:43:05 crc kubenswrapper[5125]: I1208 19:43:05.581385 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-k59mb"
Dec 08 19:43:05 crc kubenswrapper[5125]: I1208 19:43:05.585042 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-k59mb"]
Dec 08 19:43:05 crc kubenswrapper[5125]: I1208 19:43:05.678958 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjdg\" (UniqueName: \"kubernetes.io/projected/bf770656-06d9-478d-9c7a-d35bc01fc80c-kube-api-access-vxjdg\") pod \"default-snmp-webhook-6774d8dfbc-k59mb\" (UID: \"bf770656-06d9-478d-9c7a-d35bc01fc80c\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-k59mb"
Dec 08 19:43:05 crc kubenswrapper[5125]: I1208 19:43:05.780898 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vxjdg\" (UniqueName: \"kubernetes.io/projected/bf770656-06d9-478d-9c7a-d35bc01fc80c-kube-api-access-vxjdg\") pod \"default-snmp-webhook-6774d8dfbc-k59mb\" (UID: \"bf770656-06d9-478d-9c7a-d35bc01fc80c\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-k59mb"
Dec 08 19:43:05 crc kubenswrapper[5125]: I1208 19:43:05.799987 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxjdg\" (UniqueName: \"kubernetes.io/projected/bf770656-06d9-478d-9c7a-d35bc01fc80c-kube-api-access-vxjdg\") pod \"default-snmp-webhook-6774d8dfbc-k59mb\" (UID: \"bf770656-06d9-478d-9c7a-d35bc01fc80c\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-k59mb"
Dec 08 19:43:05 crc kubenswrapper[5125]: I1208 19:43:05.909442 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-k59mb"
Dec 08 19:43:06 crc kubenswrapper[5125]: I1208 19:43:06.179844 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-k59mb"]
Dec 08 19:43:06 crc kubenswrapper[5125]: W1208 19:43:06.189884 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf770656_06d9_478d_9c7a_d35bc01fc80c.slice/crio-bfc162a0c57a399a94c30d30ce1213b478c9cb15a102caa105395969891e8ac0 WatchSource:0}: Error finding container bfc162a0c57a399a94c30d30ce1213b478c9cb15a102caa105395969891e8ac0: Status 404 returned error can't find the container with id bfc162a0c57a399a94c30d30ce1213b478c9cb15a102caa105395969891e8ac0
Dec 08 19:43:07 crc kubenswrapper[5125]: I1208 19:43:07.002857 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-k59mb" event={"ID":"bf770656-06d9-478d-9c7a-d35bc01fc80c","Type":"ContainerStarted","Data":"bfc162a0c57a399a94c30d30ce1213b478c9cb15a102caa105395969891e8ac0"}
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.017516 5125 generic.go:358] "Generic (PLEG): container finished" podID="9c675e01-cff9-4e81-9b8d-8522d962bb89" containerID="f790be7a8a5f316da0297013994be507ee4039ad901a8bafaac8a70a4e3bc3ad" exitCode=0
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.017625 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"9c675e01-cff9-4e81-9b8d-8522d962bb89","Type":"ContainerDied","Data":"f790be7a8a5f316da0297013994be507ee4039ad901a8bafaac8a70a4e3bc3ad"}
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.472719 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.494115 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.494324 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.497382 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\""
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.497813 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\""
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.497851 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\""
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.497905 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\""
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.498299 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-vsr29\""
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.498440 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\""
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.634838 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-config-out\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.634899 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.634977 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-web-config\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.635008 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-tls-assets\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.635044 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-68ad4434-9f70-45ad-8115-948b5eff5230\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68ad4434-9f70-45ad-8115-948b5eff5230\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.635081 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-config-volume\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.635102 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.635138 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgp56\" (UniqueName: \"kubernetes.io/projected/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-kube-api-access-kgp56\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.635176 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.736084 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-config-volume\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.736463 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.736503 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kgp56\" (UniqueName: \"kubernetes.io/projected/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-kube-api-access-kgp56\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.736542 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.736624 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-config-out\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.736656 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.736697 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-web-config\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.736723 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-tls-assets\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: E1208 19:43:09.736740 5125 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.736765 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-68ad4434-9f70-45ad-8115-948b5eff5230\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68ad4434-9f70-45ad-8115-948b5eff5230\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: E1208 19:43:09.736838 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls podName:5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564 nodeName:}" failed. No retries permitted until 2025-12-08 19:43:10.236812673 +0000 UTC m=+847.007302947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564") : secret "default-alertmanager-proxy-tls" not found
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.741331 5125 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.741378 5125 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-68ad4434-9f70-45ad-8115-948b5eff5230\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68ad4434-9f70-45ad-8115-948b5eff5230\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/94bee0c6df53f5cc21c79abfa3e3441ef9e86a054eb582be73afdaabac4c48d2/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.742578 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.743099 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-config-out\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.753817 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-tls-assets\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.754005 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-config-volume\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.754315 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-web-config\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.754460 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.758313 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgp56\" (UniqueName: \"kubernetes.io/projected/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-kube-api-access-kgp56\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:09 crc kubenswrapper[5125]: I1208 19:43:09.786959 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-68ad4434-9f70-45ad-8115-948b5eff5230\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68ad4434-9f70-45ad-8115-948b5eff5230\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:10 crc kubenswrapper[5125]: I1208 19:43:10.244549 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:10 crc kubenswrapper[5125]: E1208 19:43:10.244784 5125 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 08 19:43:10 crc kubenswrapper[5125]: E1208 19:43:10.244889 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls podName:5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564 nodeName:}" failed. No retries permitted until 2025-12-08 19:43:11.244864876 +0000 UTC m=+848.015355200 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564") : secret "default-alertmanager-proxy-tls" not found
Dec 08 19:43:11 crc kubenswrapper[5125]: I1208 19:43:11.258217 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0"
Dec 08 19:43:11 crc kubenswrapper[5125]: E1208 19:43:11.258362 5125 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Dec 08 19:43:11 crc kubenswrapper[5125]: E1208 19:43:11.258417 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls podName:5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564 nodeName:}" failed.
No retries permitted until 2025-12-08 19:43:13.25840112 +0000 UTC m=+850.028891394 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564") : secret "default-alertmanager-proxy-tls" not found Dec 08 19:43:13 crc kubenswrapper[5125]: I1208 19:43:13.287004 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:43:13 crc kubenswrapper[5125]: I1208 19:43:13.292107 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:43:13 crc kubenswrapper[5125]: I1208 19:43:13.412875 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 08 19:43:13 crc kubenswrapper[5125]: I1208 19:43:13.973362 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 08 19:43:14 crc kubenswrapper[5125]: I1208 19:43:14.050052 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564","Type":"ContainerStarted","Data":"64dfaf6f82f33a7b3842413e2eb3fb7226e56e959c6e806de72390e31426ddb7"} Dec 08 19:43:14 crc kubenswrapper[5125]: I1208 19:43:14.052815 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-k59mb" event={"ID":"bf770656-06d9-478d-9c7a-d35bc01fc80c","Type":"ContainerStarted","Data":"1d5cf2819f38ae4dda85ecf1c60c19fc215841de4d5f0b498b809815d386c962"} Dec 08 19:43:14 crc kubenswrapper[5125]: I1208 19:43:14.075548 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-k59mb" podStartSLOduration=2.00964169 podStartE2EDuration="9.07552645s" podCreationTimestamp="2025-12-08 19:43:05 +0000 UTC" firstStartedPulling="2025-12-08 19:43:06.191175102 +0000 UTC m=+842.961665376" lastFinishedPulling="2025-12-08 19:43:13.257059862 +0000 UTC m=+850.027550136" observedRunningTime="2025-12-08 19:43:14.065223819 +0000 UTC m=+850.835714093" watchObservedRunningTime="2025-12-08 19:43:14.07552645 +0000 UTC m=+850.846016734" Dec 08 19:43:16 crc kubenswrapper[5125]: I1208 19:43:16.066818 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564","Type":"ContainerStarted","Data":"14c7c74a173f14f4867b4d66ea58c195a65b73bc6390d4dc745c5bea946a5b42"} Dec 08 19:43:17 crc kubenswrapper[5125]: I1208 19:43:17.076700 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/prometheus-default-0" event={"ID":"9c675e01-cff9-4e81-9b8d-8522d962bb89","Type":"ContainerStarted","Data":"0302aeecb894bd113ff7a4fb987e383dbc610159c88b4b7b47a7f52422833be8"} Dec 08 19:43:19 crc kubenswrapper[5125]: I1208 19:43:19.088063 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"9c675e01-cff9-4e81-9b8d-8522d962bb89","Type":"ContainerStarted","Data":"55bfd9be80759dff2ada7336c8a402d2835a7dd7e6f2d5862da3ddc18c335b7f"} Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.656451 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r"] Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.679470 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r"] Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.679655 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.682359 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.682781 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.683180 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.685457 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-k4kb8\"" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.814312 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.814719 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxc4r\" (UniqueName: \"kubernetes.io/projected/6d5356cb-6c8c-44ab-aab8-435362446754-kube-api-access-kxc4r\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.814853 5125 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.814892 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6d5356cb-6c8c-44ab-aab8-435362446754-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.815038 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6d5356cb-6c8c-44ab-aab8-435362446754-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.916262 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6d5356cb-6c8c-44ab-aab8-435362446754-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.916413 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.916444 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kxc4r\" (UniqueName: \"kubernetes.io/projected/6d5356cb-6c8c-44ab-aab8-435362446754-kube-api-access-kxc4r\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.916515 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.916975 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6d5356cb-6c8c-44ab-aab8-435362446754-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: E1208 19:43:21.916580 5125 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 08 19:43:21 crc kubenswrapper[5125]: E1208 19:43:21.917342 5125 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-default-cloud1-coll-meter-proxy-tls podName:6d5356cb-6c8c-44ab-aab8-435362446754 nodeName:}" failed. No retries permitted until 2025-12-08 19:43:22.417316795 +0000 UTC m=+859.187807089 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" (UID: "6d5356cb-6c8c-44ab-aab8-435362446754") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.917543 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6d5356cb-6c8c-44ab-aab8-435362446754-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.917913 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6d5356cb-6c8c-44ab-aab8-435362446754-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: I1208 19:43:21.933189 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:21 crc kubenswrapper[5125]: 
I1208 19:43:21.942750 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxc4r\" (UniqueName: \"kubernetes.io/projected/6d5356cb-6c8c-44ab-aab8-435362446754-kube-api-access-kxc4r\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:22 crc kubenswrapper[5125]: I1208 19:43:22.112107 5125 generic.go:358] "Generic (PLEG): container finished" podID="5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564" containerID="14c7c74a173f14f4867b4d66ea58c195a65b73bc6390d4dc745c5bea946a5b42" exitCode=0 Dec 08 19:43:22 crc kubenswrapper[5125]: I1208 19:43:22.112267 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564","Type":"ContainerDied","Data":"14c7c74a173f14f4867b4d66ea58c195a65b73bc6390d4dc745c5bea946a5b42"} Dec 08 19:43:22 crc kubenswrapper[5125]: I1208 19:43:22.425811 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:22 crc kubenswrapper[5125]: E1208 19:43:22.425959 5125 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 08 19:43:22 crc kubenswrapper[5125]: E1208 19:43:22.426030 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-default-cloud1-coll-meter-proxy-tls podName:6d5356cb-6c8c-44ab-aab8-435362446754 nodeName:}" failed. 
No retries permitted until 2025-12-08 19:43:23.426008905 +0000 UTC m=+860.196499179 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" (UID: "6d5356cb-6c8c-44ab-aab8-435362446754") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 08 19:43:23 crc kubenswrapper[5125]: I1208 19:43:23.438560 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:23 crc kubenswrapper[5125]: I1208 19:43:23.449961 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6d5356cb-6c8c-44ab-aab8-435362446754-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-zrf9r\" (UID: \"6d5356cb-6c8c-44ab-aab8-435362446754\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:23 crc kubenswrapper[5125]: I1208 19:43:23.497867 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.342750 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb"] Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.360048 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb"] Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.360175 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.364537 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.365570 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.556272 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2deaaeb6-cb08-4321-89e5-b16ad67380a8-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.556681 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.556843 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2deaaeb6-cb08-4321-89e5-b16ad67380a8-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.556898 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmj6\" (UniqueName: \"kubernetes.io/projected/2deaaeb6-cb08-4321-89e5-b16ad67380a8-kube-api-access-bnmj6\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.557072 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.658562 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2deaaeb6-cb08-4321-89e5-b16ad67380a8-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc 
kubenswrapper[5125]: I1208 19:43:24.658601 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bnmj6\" (UniqueName: \"kubernetes.io/projected/2deaaeb6-cb08-4321-89e5-b16ad67380a8-kube-api-access-bnmj6\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.658671 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.658699 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2deaaeb6-cb08-4321-89e5-b16ad67380a8-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.658739 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.659054 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/2deaaeb6-cb08-4321-89e5-b16ad67380a8-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: E1208 19:43:24.659125 5125 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 08 19:43:24 crc kubenswrapper[5125]: E1208 19:43:24.659224 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-default-cloud1-ceil-meter-proxy-tls podName:2deaaeb6-cb08-4321-89e5-b16ad67380a8 nodeName:}" failed. No retries permitted until 2025-12-08 19:43:25.159202661 +0000 UTC m=+861.929692935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" (UID: "2deaaeb6-cb08-4321-89e5-b16ad67380a8") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.661000 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/2deaaeb6-cb08-4321-89e5-b16ad67380a8-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.667493 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: 
\"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:24 crc kubenswrapper[5125]: I1208 19:43:24.678260 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnmj6\" (UniqueName: \"kubernetes.io/projected/2deaaeb6-cb08-4321-89e5-b16ad67380a8-kube-api-access-bnmj6\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:25 crc kubenswrapper[5125]: I1208 19:43:25.164622 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" Dec 08 19:43:25 crc kubenswrapper[5125]: E1208 19:43:25.164825 5125 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 08 19:43:25 crc kubenswrapper[5125]: E1208 19:43:25.164916 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-default-cloud1-ceil-meter-proxy-tls podName:2deaaeb6-cb08-4321-89e5-b16ad67380a8 nodeName:}" failed. No retries permitted until 2025-12-08 19:43:26.164893408 +0000 UTC m=+862.935383772 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" (UID: "2deaaeb6-cb08-4321-89e5-b16ad67380a8") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 08 19:43:26 crc kubenswrapper[5125]: I1208 19:43:26.013928 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r"]
Dec 08 19:43:26 crc kubenswrapper[5125]: I1208 19:43:26.144336 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" event={"ID":"6d5356cb-6c8c-44ab-aab8-435362446754","Type":"ContainerStarted","Data":"7165d133a893563513809813fec1fa08cd05124096b685470292b20a1c57edd1"}
Dec 08 19:43:26 crc kubenswrapper[5125]: I1208 19:43:26.181676 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb"
Dec 08 19:43:26 crc kubenswrapper[5125]: I1208 19:43:26.188481 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/2deaaeb6-cb08-4321-89e5-b16ad67380a8-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb\" (UID: \"2deaaeb6-cb08-4321-89e5-b16ad67380a8\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb"
Dec 08 19:43:26 crc kubenswrapper[5125]: I1208 19:43:26.478021 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb"
Dec 08 19:43:27 crc kubenswrapper[5125]: I1208 19:43:27.154743 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"9c675e01-cff9-4e81-9b8d-8522d962bb89","Type":"ContainerStarted","Data":"6026c5c91e93badccf0a9a5c3b993a02b5fc5f508f00df870b0dd67edb647cf3"}
Dec 08 19:43:27 crc kubenswrapper[5125]: I1208 19:43:27.184804 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.833091271 podStartE2EDuration="32.184785341s" podCreationTimestamp="2025-12-08 19:42:55 +0000 UTC" firstStartedPulling="2025-12-08 19:42:58.550551552 +0000 UTC m=+835.321041816" lastFinishedPulling="2025-12-08 19:43:25.902245612 +0000 UTC m=+862.672735886" observedRunningTime="2025-12-08 19:43:27.179673971 +0000 UTC m=+863.950164265" watchObservedRunningTime="2025-12-08 19:43:27.184785341 +0000 UTC m=+863.955275615"
Dec 08 19:43:27 crc kubenswrapper[5125]: I1208 19:43:27.428655 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb"]
Dec 08 19:43:27 crc kubenswrapper[5125]: W1208 19:43:27.434120 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2deaaeb6_cb08_4321_89e5_b16ad67380a8.slice/crio-3382842e32c7f58e3442f79705e6a16f520a2186e2c6b1b85d07f30595c5f940 WatchSource:0}: Error finding container 3382842e32c7f58e3442f79705e6a16f520a2186e2c6b1b85d07f30595c5f940: Status 404 returned error can't find the container with id 3382842e32c7f58e3442f79705e6a16f520a2186e2c6b1b85d07f30595c5f940
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.148515 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.148872 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.162237 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" event={"ID":"6d5356cb-6c8c-44ab-aab8-435362446754","Type":"ContainerStarted","Data":"6719822fef4e28c01682d9a33e4b76040de722fd122c80a1df78d85bdb0749cc"}
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.164072 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564","Type":"ContainerStarted","Data":"680b168616ab92cfe8bcb7e1675147583523b1514866ad8b9e5535223232b993"}
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.165507 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" event={"ID":"2deaaeb6-cb08-4321-89e5-b16ad67380a8","Type":"ContainerStarted","Data":"53e1b49deb1e79a57d40f2c34d122922651f7bf2d58129266e2a7c57d1f2cab9"}
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.165553 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" event={"ID":"2deaaeb6-cb08-4321-89e5-b16ad67380a8","Type":"ContainerStarted","Data":"3382842e32c7f58e3442f79705e6a16f520a2186e2c6b1b85d07f30595c5f940"}
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.192481 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.384680 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"]
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.396964 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"]
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.397120 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.407127 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\""
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.407335 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\""
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.410504 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59nkl\" (UniqueName: \"kubernetes.io/projected/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-kube-api-access-59nkl\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.410552 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.410580 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.410912 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.411194 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.512385 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.513513 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.513591 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-59nkl\" (UniqueName: \"kubernetes.io/projected/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-kube-api-access-59nkl\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.513650 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.513671 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.513720 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: E1208 19:43:28.513870 5125 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Dec 08 19:43:28 crc kubenswrapper[5125]: E1208 19:43:28.513971 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-default-cloud1-sens-meter-proxy-tls podName:741bdbe2-0e2d-4d35-bd98-51e84ae1a831 nodeName:}" failed. No retries permitted until 2025-12-08 19:43:29.013948002 +0000 UTC m=+865.784438276 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" (UID: "741bdbe2-0e2d-4d35-bd98-51e84ae1a831") : secret "default-cloud1-sens-meter-proxy-tls" not found
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.514310 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.664973 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-59nkl\" (UniqueName: \"kubernetes.io/projected/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-kube-api-access-59nkl\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:28 crc kubenswrapper[5125]: I1208 19:43:28.666011 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:29 crc kubenswrapper[5125]: I1208 19:43:29.025283 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:29 crc kubenswrapper[5125]: E1208 19:43:29.025449 5125 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Dec 08 19:43:29 crc kubenswrapper[5125]: E1208 19:43:29.025549 5125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-default-cloud1-sens-meter-proxy-tls podName:741bdbe2-0e2d-4d35-bd98-51e84ae1a831 nodeName:}" failed. No retries permitted until 2025-12-08 19:43:30.025521161 +0000 UTC m=+866.796011465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" (UID: "741bdbe2-0e2d-4d35-bd98-51e84ae1a831") : secret "default-cloud1-sens-meter-proxy-tls" not found
Dec 08 19:43:29 crc kubenswrapper[5125]: I1208 19:43:29.177590 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564","Type":"ContainerStarted","Data":"7f3b30b85c5038c229bf6510fa380fcef7ead472b0802fc29ac6c708ab49eafa"}
Dec 08 19:43:29 crc kubenswrapper[5125]: I1208 19:43:29.223310 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Dec 08 19:43:30 crc kubenswrapper[5125]: I1208 19:43:30.040181 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:30 crc kubenswrapper[5125]: I1208 19:43:30.046335 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/741bdbe2-0e2d-4d35-bd98-51e84ae1a831-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk\" (UID: \"741bdbe2-0e2d-4d35-bd98-51e84ae1a831\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:30 crc kubenswrapper[5125]: I1208 19:43:30.193380 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"5a9e0eeb-3818-4ae3-9d50-d5bcc3aab564","Type":"ContainerStarted","Data":"00588369208ca4b7b3efe763b83a307d254628e546eba8ab15a4e00142af2023"}
Dec 08 19:43:30 crc kubenswrapper[5125]: I1208 19:43:30.221109 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"
Dec 08 19:43:30 crc kubenswrapper[5125]: I1208 19:43:30.222394 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=14.840709208 podStartE2EDuration="22.22237242s" podCreationTimestamp="2025-12-08 19:43:08 +0000 UTC" firstStartedPulling="2025-12-08 19:43:22.113161623 +0000 UTC m=+858.883651887" lastFinishedPulling="2025-12-08 19:43:29.494824825 +0000 UTC m=+866.265315099" observedRunningTime="2025-12-08 19:43:30.217035535 +0000 UTC m=+866.987525819" watchObservedRunningTime="2025-12-08 19:43:30.22237242 +0000 UTC m=+866.992862694"
Dec 08 19:43:30 crc kubenswrapper[5125]: I1208 19:43:30.688822 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk"]
Dec 08 19:43:30 crc kubenswrapper[5125]: W1208 19:43:30.698761 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod741bdbe2_0e2d_4d35_bd98_51e84ae1a831.slice/crio-e6644e517304faf2eafdb2e1ac86ec66eb523ff4bd34e146eee1aeedff4b9313 WatchSource:0}: Error finding container e6644e517304faf2eafdb2e1ac86ec66eb523ff4bd34e146eee1aeedff4b9313: Status 404 returned error can't find the container with id e6644e517304faf2eafdb2e1ac86ec66eb523ff4bd34e146eee1aeedff4b9313
Dec 08 19:43:31 crc kubenswrapper[5125]: I1208 19:43:31.207351 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" event={"ID":"741bdbe2-0e2d-4d35-bd98-51e84ae1a831","Type":"ContainerStarted","Data":"e6644e517304faf2eafdb2e1ac86ec66eb523ff4bd34e146eee1aeedff4b9313"}
Dec 08 19:43:34 crc kubenswrapper[5125]: I1208 19:43:34.908167 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"]
Dec 08 19:43:34 crc kubenswrapper[5125]: I1208 19:43:34.922856 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:34 crc kubenswrapper[5125]: I1208 19:43:34.923161 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"]
Dec 08 19:43:34 crc kubenswrapper[5125]: I1208 19:43:34.925290 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\""
Dec 08 19:43:34 crc kubenswrapper[5125]: I1208 19:43:34.926118 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\""
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.016911 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.016958 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.017084 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbx45\" (UniqueName: \"kubernetes.io/projected/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-kube-api-access-wbx45\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.017150 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.118131 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wbx45\" (UniqueName: \"kubernetes.io/projected/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-kube-api-access-wbx45\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.118196 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.118251 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.118292 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.118816 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.120482 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.129003 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.135718 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbx45\" (UniqueName: \"kubernetes.io/projected/cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd-kube-api-access-wbx45\") pod \"default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp\" (UID: \"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.234909 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" event={"ID":"2deaaeb6-cb08-4321-89e5-b16ad67380a8","Type":"ContainerStarted","Data":"47583a655544e064416e05807920b13cc83763b778243945e8b5f060198d0c30"}
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.237328 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" event={"ID":"6d5356cb-6c8c-44ab-aab8-435362446754","Type":"ContainerStarted","Data":"0aafc4c086cb3e14bf55251500dc2c997938e0f1d7a8c672bb8a4c0b1e867fec"}
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.238376 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"
Dec 08 19:43:35 crc kubenswrapper[5125]: I1208 19:43:35.239049 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" event={"ID":"741bdbe2-0e2d-4d35-bd98-51e84ae1a831","Type":"ContainerStarted","Data":"5105978fdf9de65422c32fedb96f5ea220a99e14892a408026b69c6b73d3766c"}
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.109910 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp"]
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.246874 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp" event={"ID":"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd","Type":"ContainerStarted","Data":"0575cb45783dd7dabf568a5cb5b581a2b46ec15f00bdfaefcc36c2b0cd01647a"}
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.248952 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" event={"ID":"741bdbe2-0e2d-4d35-bd98-51e84ae1a831","Type":"ContainerStarted","Data":"4dde27a169ce21d36dede0563786ff2b6ecbc94688248dbba7298f2b05dc7959"}
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.739517 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"]
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.743451 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.749101 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\""
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.749664 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"]
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.851338 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/077469e1-469a-4837-b27e-a39eb253d98b-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.851507 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/077469e1-469a-4837-b27e-a39eb253d98b-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.854216 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/077469e1-469a-4837-b27e-a39eb253d98b-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.854325 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9875\" (UniqueName: \"kubernetes.io/projected/077469e1-469a-4837-b27e-a39eb253d98b-kube-api-access-t9875\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.955944 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/077469e1-469a-4837-b27e-a39eb253d98b-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.956026 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/077469e1-469a-4837-b27e-a39eb253d98b-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.956118 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t9875\" (UniqueName: \"kubernetes.io/projected/077469e1-469a-4837-b27e-a39eb253d98b-kube-api-access-t9875\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.956205 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/077469e1-469a-4837-b27e-a39eb253d98b-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.956592 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/077469e1-469a-4837-b27e-a39eb253d98b-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.957374 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/077469e1-469a-4837-b27e-a39eb253d98b-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.967014 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/077469e1-469a-4837-b27e-a39eb253d98b-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:36 crc kubenswrapper[5125]: I1208 19:43:36.974323 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9875\" (UniqueName: \"kubernetes.io/projected/077469e1-469a-4837-b27e-a39eb253d98b-kube-api-access-t9875\") pod \"default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r\" (UID: \"077469e1-469a-4837-b27e-a39eb253d98b\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:37 crc kubenswrapper[5125]: I1208 19:43:37.081320 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"
Dec 08 19:43:37 crc kubenswrapper[5125]: I1208 19:43:37.263338 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp" event={"ID":"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd","Type":"ContainerStarted","Data":"f60fae619be16c380d8ffeaf1c325edc8862bdfd30581bfb47899fa72a5f1af4"}
Dec 08 19:43:37 crc kubenswrapper[5125]: I1208 19:43:37.552899 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r"]
Dec 08 19:43:37 crc kubenswrapper[5125]: W1208 19:43:37.576295 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod077469e1_469a_4837_b27e_a39eb253d98b.slice/crio-0e41522a268c5a028662367c1a5efad53f165289ca20914219630d64dbf98526 WatchSource:0}: Error finding container 0e41522a268c5a028662367c1a5efad53f165289ca20914219630d64dbf98526: Status 404 returned error can't find the container with id 0e41522a268c5a028662367c1a5efad53f165289ca20914219630d64dbf98526
Dec 08 19:43:38 crc kubenswrapper[5125]: I1208 19:43:38.271365 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r" event={"ID":"077469e1-469a-4837-b27e-a39eb253d98b","Type":"ContainerStarted","Data":"0e41522a268c5a028662367c1a5efad53f165289ca20914219630d64dbf98526"}
Dec 08 19:43:43 crc kubenswrapper[5125]: I1208 19:43:43.305784 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rvpx4"]
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.370341 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvpx4"]
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.370531 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvpx4"
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.480378 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhhnc\" (UniqueName: \"kubernetes.io/projected/aaa825d2-d84c-4a24-8e68-b718290d504d-kube-api-access-bhhnc\") pod \"certified-operators-rvpx4\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " pod="openshift-marketplace/certified-operators-rvpx4"
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.481116 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-utilities\") pod \"certified-operators-rvpx4\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " pod="openshift-marketplace/certified-operators-rvpx4"
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.481197 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-catalog-content\") pod \"certified-operators-rvpx4\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " pod="openshift-marketplace/certified-operators-rvpx4"
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.603882 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bhhnc\" (UniqueName: \"kubernetes.io/projected/aaa825d2-d84c-4a24-8e68-b718290d504d-kube-api-access-bhhnc\") pod \"certified-operators-rvpx4\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " pod="openshift-marketplace/certified-operators-rvpx4"
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.603959 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-utilities\") pod \"certified-operators-rvpx4\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " pod="openshift-marketplace/certified-operators-rvpx4"
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.603998 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-catalog-content\") pod \"certified-operators-rvpx4\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " pod="openshift-marketplace/certified-operators-rvpx4"
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.604475 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-catalog-content\") pod \"certified-operators-rvpx4\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " pod="openshift-marketplace/certified-operators-rvpx4"
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.604765 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-utilities\") pod \"certified-operators-rvpx4\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " pod="openshift-marketplace/certified-operators-rvpx4"
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.650364 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhhnc\" (UniqueName: \"kubernetes.io/projected/aaa825d2-d84c-4a24-8e68-b718290d504d-kube-api-access-bhhnc\") pod \"certified-operators-rvpx4\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " pod="openshift-marketplace/certified-operators-rvpx4"
Dec 08 19:43:45 crc kubenswrapper[5125]: I1208 19:43:45.703223 5125 util.go:30] "No sandbox for
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvpx4" Dec 08 19:43:48 crc kubenswrapper[5125]: I1208 19:43:48.404982 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvpx4"] Dec 08 19:43:48 crc kubenswrapper[5125]: W1208 19:43:48.415056 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaa825d2_d84c_4a24_8e68_b718290d504d.slice/crio-72553030036d5bbae390ec1c0d197972041044f712744b670a34f5318c0e98bc WatchSource:0}: Error finding container 72553030036d5bbae390ec1c0d197972041044f712744b670a34f5318c0e98bc: Status 404 returned error can't find the container with id 72553030036d5bbae390ec1c0d197972041044f712744b670a34f5318c0e98bc Dec 08 19:43:48 crc kubenswrapper[5125]: I1208 19:43:48.695195 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9grpj"] Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.117730 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.118603 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9grpj"] Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.156823 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92kd\" (UniqueName: \"kubernetes.io/projected/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-kube-api-access-h92kd\") pod \"community-operators-9grpj\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.156877 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-catalog-content\") pod \"community-operators-9grpj\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.157114 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-utilities\") pod \"community-operators-9grpj\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.258323 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h92kd\" (UniqueName: \"kubernetes.io/projected/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-kube-api-access-h92kd\") pod \"community-operators-9grpj\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.258713 5125 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-catalog-content\") pod \"community-operators-9grpj\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.258785 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-utilities\") pod \"community-operators-9grpj\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.259627 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-catalog-content\") pod \"community-operators-9grpj\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.259877 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-utilities\") pod \"community-operators-9grpj\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.290456 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h92kd\" (UniqueName: \"kubernetes.io/projected/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-kube-api-access-h92kd\") pod \"community-operators-9grpj\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.376984 5125 generic.go:358] "Generic (PLEG): container 
finished" podID="aaa825d2-d84c-4a24-8e68-b718290d504d" containerID="3f603736700d38652eb48b4dbe0bd17df06ecb7af763ce1abd3e709d01dde091" exitCode=0 Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.377042 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvpx4" event={"ID":"aaa825d2-d84c-4a24-8e68-b718290d504d","Type":"ContainerDied","Data":"3f603736700d38652eb48b4dbe0bd17df06ecb7af763ce1abd3e709d01dde091"} Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.377066 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvpx4" event={"ID":"aaa825d2-d84c-4a24-8e68-b718290d504d","Type":"ContainerStarted","Data":"72553030036d5bbae390ec1c0d197972041044f712744b670a34f5318c0e98bc"} Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.456571 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.886889 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-2jht7"] Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.887659 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" podUID="a86a2ea5-e88b-4b25-a5ad-95e37bae9428" containerName="default-interconnect" containerID="cri-o://97a8e569439335a9b5882d0098e87e5b4b9cc8bd4da7311912b761c027fa5bd3" gracePeriod=30 Dec 08 19:43:49 crc kubenswrapper[5125]: I1208 19:43:49.940020 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9grpj"] Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.384573 5125 generic.go:358] "Generic (PLEG): container finished" podID="2deaaeb6-cb08-4321-89e5-b16ad67380a8" containerID="47583a655544e064416e05807920b13cc83763b778243945e8b5f060198d0c30" exitCode=0 Dec 08 19:43:50 crc 
kubenswrapper[5125]: I1208 19:43:50.384870 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" event={"ID":"2deaaeb6-cb08-4321-89e5-b16ad67380a8","Type":"ContainerDied","Data":"47583a655544e064416e05807920b13cc83763b778243945e8b5f060198d0c30"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.384994 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" event={"ID":"2deaaeb6-cb08-4321-89e5-b16ad67380a8","Type":"ContainerStarted","Data":"337b76340e7d4882af775f2a2f75aaecc9d7b1c91770e4da7fea2151d0ca4e91"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.385390 5125 scope.go:117] "RemoveContainer" containerID="47583a655544e064416e05807920b13cc83763b778243945e8b5f060198d0c30" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.388699 5125 generic.go:358] "Generic (PLEG): container finished" podID="a86a2ea5-e88b-4b25-a5ad-95e37bae9428" containerID="97a8e569439335a9b5882d0098e87e5b4b9cc8bd4da7311912b761c027fa5bd3" exitCode=0 Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.388834 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" event={"ID":"a86a2ea5-e88b-4b25-a5ad-95e37bae9428","Type":"ContainerDied","Data":"97a8e569439335a9b5882d0098e87e5b4b9cc8bd4da7311912b761c027fa5bd3"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.388857 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" event={"ID":"a86a2ea5-e88b-4b25-a5ad-95e37bae9428","Type":"ContainerDied","Data":"0c626061692dd528d40f28f1f73e56ecac229a654c770b31592f532820a79120"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.388871 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c626061692dd528d40f28f1f73e56ecac229a654c770b31592f532820a79120" Dec 08 
19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.392086 5125 generic.go:358] "Generic (PLEG): container finished" podID="aaa825d2-d84c-4a24-8e68-b718290d504d" containerID="4a913e059e8a1fb50cda2cd1e424a5d810eb500d709227a06ce70ead66b91069" exitCode=0 Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.392152 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.392172 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvpx4" event={"ID":"aaa825d2-d84c-4a24-8e68-b718290d504d","Type":"ContainerDied","Data":"4a913e059e8a1fb50cda2cd1e424a5d810eb500d709227a06ce70ead66b91069"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.396507 5125 generic.go:358] "Generic (PLEG): container finished" podID="6d5356cb-6c8c-44ab-aab8-435362446754" containerID="0aafc4c086cb3e14bf55251500dc2c997938e0f1d7a8c672bb8a4c0b1e867fec" exitCode=0 Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.396581 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" event={"ID":"6d5356cb-6c8c-44ab-aab8-435362446754","Type":"ContainerDied","Data":"0aafc4c086cb3e14bf55251500dc2c997938e0f1d7a8c672bb8a4c0b1e867fec"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.396600 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" event={"ID":"6d5356cb-6c8c-44ab-aab8-435362446754","Type":"ContainerStarted","Data":"8dffcfefa06fef024f9cd8db23f539beed343ee61249f4839231632a9811a611"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.397882 5125 scope.go:117] "RemoveContainer" containerID="0aafc4c086cb3e14bf55251500dc2c997938e0f1d7a8c672bb8a4c0b1e867fec" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.404780 5125 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r" event={"ID":"077469e1-469a-4837-b27e-a39eb253d98b","Type":"ContainerStarted","Data":"9cae4762e3526d608a6240aeefcde3f3a681c3881b455e774c2cb2ed12d9de64"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.404826 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r" event={"ID":"077469e1-469a-4837-b27e-a39eb253d98b","Type":"ContainerStarted","Data":"75dcacadddcc0038085dfe86a91af920fc98cbaa4f9d56d703f8604d2fe31747"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.409921 5125 generic.go:358] "Generic (PLEG): container finished" podID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" containerID="01646ef06562a901961204c6f04f78b1a4fb50e07c8a502265985fabe3f45870" exitCode=0 Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.410030 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grpj" event={"ID":"c4f5a7e7-22ed-47d2-bfea-b73f7df12065","Type":"ContainerDied","Data":"01646ef06562a901961204c6f04f78b1a4fb50e07c8a502265985fabe3f45870"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.410051 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grpj" event={"ID":"c4f5a7e7-22ed-47d2-bfea-b73f7df12065","Type":"ContainerStarted","Data":"dd8868566379727404c6c052a4a94f95c6b2925dbe7b27b9bf7e61770788c612"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.414105 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp" event={"ID":"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd","Type":"ContainerStarted","Data":"c44f6207533891bbef129ce847ad5bad02ac975cdce39dff25e1c7621b907973"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.417732 5125 generic.go:358] "Generic (PLEG): container finished" 
podID="741bdbe2-0e2d-4d35-bd98-51e84ae1a831" containerID="4dde27a169ce21d36dede0563786ff2b6ecbc94688248dbba7298f2b05dc7959" exitCode=0 Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.417912 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" event={"ID":"741bdbe2-0e2d-4d35-bd98-51e84ae1a831","Type":"ContainerDied","Data":"4dde27a169ce21d36dede0563786ff2b6ecbc94688248dbba7298f2b05dc7959"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.417964 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" event={"ID":"741bdbe2-0e2d-4d35-bd98-51e84ae1a831","Type":"ContainerStarted","Data":"2ccc91d7d2b6c7515acb3bab2e08402c3f83fff1855e796fdb0ada8fd093bd58"} Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.418420 5125 scope.go:117] "RemoveContainer" containerID="4dde27a169ce21d36dede0563786ff2b6ecbc94688248dbba7298f2b05dc7959" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.477301 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-ca\") pod \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.477392 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-ca\") pod \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.477411 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h45c4\" (UniqueName: 
\"kubernetes.io/projected/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-kube-api-access-h45c4\") pod \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.477468 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-users\") pod \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.477495 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-credentials\") pod \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.477514 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-credentials\") pod \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.477536 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-config\") pod \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\" (UID: \"a86a2ea5-e88b-4b25-a5ad-95e37bae9428\") " Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.484075 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "a86a2ea5-e88b-4b25-a5ad-95e37bae9428" (UID: 
"a86a2ea5-e88b-4b25-a5ad-95e37bae9428"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.488846 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "a86a2ea5-e88b-4b25-a5ad-95e37bae9428" (UID: "a86a2ea5-e88b-4b25-a5ad-95e37bae9428"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.489338 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "a86a2ea5-e88b-4b25-a5ad-95e37bae9428" (UID: "a86a2ea5-e88b-4b25-a5ad-95e37bae9428"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.489409 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "a86a2ea5-e88b-4b25-a5ad-95e37bae9428" (UID: "a86a2ea5-e88b-4b25-a5ad-95e37bae9428"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.489789 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "a86a2ea5-e88b-4b25-a5ad-95e37bae9428" (UID: "a86a2ea5-e88b-4b25-a5ad-95e37bae9428"). 
InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.491432 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-24f6m"] Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.493114 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a86a2ea5-e88b-4b25-a5ad-95e37bae9428" containerName="default-interconnect" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.493136 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="a86a2ea5-e88b-4b25-a5ad-95e37bae9428" containerName="default-interconnect" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.493400 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="a86a2ea5-e88b-4b25-a5ad-95e37bae9428" containerName="default-interconnect" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.500497 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-kube-api-access-h45c4" (OuterVolumeSpecName: "kube-api-access-h45c4") pod "a86a2ea5-e88b-4b25-a5ad-95e37bae9428" (UID: "a86a2ea5-e88b-4b25-a5ad-95e37bae9428"). InnerVolumeSpecName "kube-api-access-h45c4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.500717 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "a86a2ea5-e88b-4b25-a5ad-95e37bae9428" (UID: "a86a2ea5-e88b-4b25-a5ad-95e37bae9428"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.508541 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.513493 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-24f6m"] Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.532724 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r" podStartSLOduration=2.1496858899999998 podStartE2EDuration="14.532708042s" podCreationTimestamp="2025-12-08 19:43:36 +0000 UTC" firstStartedPulling="2025-12-08 19:43:37.585026273 +0000 UTC m=+874.355516547" lastFinishedPulling="2025-12-08 19:43:49.968048425 +0000 UTC m=+886.738538699" observedRunningTime="2025-12-08 19:43:50.48245553 +0000 UTC m=+887.252945824" watchObservedRunningTime="2025-12-08 19:43:50.532708042 +0000 UTC m=+887.303198316" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.570593 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp" podStartSLOduration=3.487390508 podStartE2EDuration="16.570576116s" podCreationTimestamp="2025-12-08 19:43:34 +0000 UTC" firstStartedPulling="2025-12-08 19:43:36.125257995 +0000 UTC m=+872.895748269" lastFinishedPulling="2025-12-08 19:43:49.208443613 +0000 UTC m=+885.978933877" observedRunningTime="2025-12-08 19:43:50.546127519 +0000 UTC m=+887.316617813" watchObservedRunningTime="2025-12-08 19:43:50.570576116 +0000 UTC m=+887.341066390" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.579478 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.579516 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.579549 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grw55\" (UniqueName: \"kubernetes.io/projected/bada9555-4b4c-45a2-8479-97c12deb8e88-kube-api-access-grw55\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.579578 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.579705 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: 
I1208 19:43:50.579744 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/bada9555-4b4c-45a2-8479-97c12deb8e88-sasl-config\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.579776 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-sasl-users\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.580011 5125 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.580045 5125 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.580057 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h45c4\" (UniqueName: \"kubernetes.io/projected/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-kube-api-access-h45c4\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.580070 5125 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-users\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:50 crc kubenswrapper[5125]: 
I1208 19:43:50.580080 5125 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.580091 5125 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.580099 5125 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a86a2ea5-e88b-4b25-a5ad-95e37bae9428-sasl-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.681588 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.681695 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/bada9555-4b4c-45a2-8479-97c12deb8e88-sasl-config\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.681727 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-sasl-users\") pod 
\"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.681792 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.681816 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.681860 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-grw55\" (UniqueName: \"kubernetes.io/projected/bada9555-4b4c-45a2-8479-97c12deb8e88-kube-api-access-grw55\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.681902 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 
19:43:50.683723 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/bada9555-4b4c-45a2-8479-97c12deb8e88-sasl-config\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.699601 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.699618 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.699997 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-sasl-users\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.700630 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: 
\"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.700984 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/bada9555-4b4c-45a2-8479-97c12deb8e88-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.703830 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-grw55\" (UniqueName: \"kubernetes.io/projected/bada9555-4b4c-45a2-8479-97c12deb8e88-kube-api-access-grw55\") pod \"default-interconnect-55bf8d5cb-24f6m\" (UID: \"bada9555-4b4c-45a2-8479-97c12deb8e88\") " pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:50 crc kubenswrapper[5125]: I1208 19:43:50.857666 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.385973 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-24f6m"] Dec 08 19:43:51 crc kubenswrapper[5125]: W1208 19:43:51.391952 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbada9555_4b4c_45a2_8479_97c12deb8e88.slice/crio-7a28c3b6bd8c01f64a971fd598c6be16cfe85b82614cf848311299c57d0ac1d4 WatchSource:0}: Error finding container 7a28c3b6bd8c01f64a971fd598c6be16cfe85b82614cf848311299c57d0ac1d4: Status 404 returned error can't find the container with id 7a28c3b6bd8c01f64a971fd598c6be16cfe85b82614cf848311299c57d0ac1d4 Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.428147 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" event={"ID":"6d5356cb-6c8c-44ab-aab8-435362446754","Type":"ContainerStarted","Data":"6bdc10fec07acc719d7ead2586a52051ddf7420b85226d2ff84fb52463c47285"} Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.430330 5125 generic.go:358] "Generic (PLEG): container finished" podID="077469e1-469a-4837-b27e-a39eb253d98b" containerID="75dcacadddcc0038085dfe86a91af920fc98cbaa4f9d56d703f8604d2fe31747" exitCode=0 Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.430387 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r" event={"ID":"077469e1-469a-4837-b27e-a39eb253d98b","Type":"ContainerDied","Data":"75dcacadddcc0038085dfe86a91af920fc98cbaa4f9d56d703f8604d2fe31747"} Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.430879 5125 scope.go:117] "RemoveContainer" containerID="75dcacadddcc0038085dfe86a91af920fc98cbaa4f9d56d703f8604d2fe31747" Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 
19:43:51.441275 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grpj" event={"ID":"c4f5a7e7-22ed-47d2-bfea-b73f7df12065","Type":"ContainerStarted","Data":"a088724b26a36c4675b668f8ff664f8f04c5a07d988da9791d21d2fb03b36bda"} Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.444114 5125 generic.go:358] "Generic (PLEG): container finished" podID="cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd" containerID="f60fae619be16c380d8ffeaf1c325edc8862bdfd30581bfb47899fa72a5f1af4" exitCode=0 Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.444245 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp" event={"ID":"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd","Type":"ContainerDied","Data":"f60fae619be16c380d8ffeaf1c325edc8862bdfd30581bfb47899fa72a5f1af4"} Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.444804 5125 scope.go:117] "RemoveContainer" containerID="f60fae619be16c380d8ffeaf1c325edc8862bdfd30581bfb47899fa72a5f1af4" Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.450101 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" podStartSLOduration=5.688920841 podStartE2EDuration="30.45008556s" podCreationTimestamp="2025-12-08 19:43:21 +0000 UTC" firstStartedPulling="2025-12-08 19:43:26.027025389 +0000 UTC m=+862.797515663" lastFinishedPulling="2025-12-08 19:43:50.788190108 +0000 UTC m=+887.558680382" observedRunningTime="2025-12-08 19:43:51.446097762 +0000 UTC m=+888.216588056" watchObservedRunningTime="2025-12-08 19:43:51.45008556 +0000 UTC m=+888.220575834" Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.462221 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" 
event={"ID":"741bdbe2-0e2d-4d35-bd98-51e84ae1a831","Type":"ContainerStarted","Data":"949293bbe9e06e4da883ba1c25b9ab4dcadec3ebc69cde17ca35499c9930efd7"} Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.468238 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" event={"ID":"bada9555-4b4c-45a2-8479-97c12deb8e88","Type":"ContainerStarted","Data":"7a28c3b6bd8c01f64a971fd598c6be16cfe85b82614cf848311299c57d0ac1d4"} Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.474147 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" event={"ID":"2deaaeb6-cb08-4321-89e5-b16ad67380a8","Type":"ContainerStarted","Data":"e8125468bd5c9ff72d32ee4e4d2b41e399306d84ff2ea5d78a71b401a975c6b3"} Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.483138 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-2jht7" Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.483628 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvpx4" event={"ID":"aaa825d2-d84c-4a24-8e68-b718290d504d","Type":"ContainerStarted","Data":"7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba"} Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.533989 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rvpx4" podStartSLOduration=8.042187252 podStartE2EDuration="8.533970001s" podCreationTimestamp="2025-12-08 19:43:43 +0000 UTC" firstStartedPulling="2025-12-08 19:43:49.377791137 +0000 UTC m=+886.148281401" lastFinishedPulling="2025-12-08 19:43:49.869573876 +0000 UTC m=+886.640064150" observedRunningTime="2025-12-08 19:43:51.530737933 +0000 UTC m=+888.301228217" watchObservedRunningTime="2025-12-08 19:43:51.533970001 +0000 UTC m=+888.304460305" Dec 
08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.552642 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" podStartSLOduration=3.96956475 podStartE2EDuration="27.552623351s" podCreationTimestamp="2025-12-08 19:43:24 +0000 UTC" firstStartedPulling="2025-12-08 19:43:27.435745993 +0000 UTC m=+864.206236257" lastFinishedPulling="2025-12-08 19:43:51.018804584 +0000 UTC m=+887.789294858" observedRunningTime="2025-12-08 19:43:51.548675993 +0000 UTC m=+888.319166287" watchObservedRunningTime="2025-12-08 19:43:51.552623351 +0000 UTC m=+888.323113625" Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.567938 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" podStartSLOduration=3.450761511 podStartE2EDuration="23.567916898s" podCreationTimestamp="2025-12-08 19:43:28 +0000 UTC" firstStartedPulling="2025-12-08 19:43:30.702736016 +0000 UTC m=+867.473226300" lastFinishedPulling="2025-12-08 19:43:50.819891393 +0000 UTC m=+887.590381687" observedRunningTime="2025-12-08 19:43:51.564183926 +0000 UTC m=+888.334674220" watchObservedRunningTime="2025-12-08 19:43:51.567916898 +0000 UTC m=+888.338407182" Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.593224 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-2jht7"] Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.604760 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-2jht7"] Dec 08 19:43:51 crc kubenswrapper[5125]: I1208 19:43:51.778842 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a86a2ea5-e88b-4b25-a5ad-95e37bae9428" path="/var/lib/kubelet/pods/a86a2ea5-e88b-4b25-a5ad-95e37bae9428/volumes" Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.492482 5125 
generic.go:358] "Generic (PLEG): container finished" podID="6d5356cb-6c8c-44ab-aab8-435362446754" containerID="6bdc10fec07acc719d7ead2586a52051ddf7420b85226d2ff84fb52463c47285" exitCode=0 Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.492563 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" event={"ID":"6d5356cb-6c8c-44ab-aab8-435362446754","Type":"ContainerDied","Data":"6bdc10fec07acc719d7ead2586a52051ddf7420b85226d2ff84fb52463c47285"} Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.493039 5125 scope.go:117] "RemoveContainer" containerID="6bdc10fec07acc719d7ead2586a52051ddf7420b85226d2ff84fb52463c47285" Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.494097 5125 scope.go:117] "RemoveContainer" containerID="0aafc4c086cb3e14bf55251500dc2c997938e0f1d7a8c672bb8a4c0b1e867fec" Dec 08 19:43:52 crc kubenswrapper[5125]: E1208 19:43:52.494690 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-zrf9r_service-telemetry(6d5356cb-6c8c-44ab-aab8-435362446754)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" podUID="6d5356cb-6c8c-44ab-aab8-435362446754" Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.496924 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r" event={"ID":"077469e1-469a-4837-b27e-a39eb253d98b","Type":"ContainerStarted","Data":"41122cfad71117885d436e6d3d2dd91958e9a700b5cb18cb209dc8b026720008"} Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.500891 5125 generic.go:358] "Generic (PLEG): container finished" podID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" containerID="a088724b26a36c4675b668f8ff664f8f04c5a07d988da9791d21d2fb03b36bda" exitCode=0 Dec 08 
19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.500987 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grpj" event={"ID":"c4f5a7e7-22ed-47d2-bfea-b73f7df12065","Type":"ContainerDied","Data":"a088724b26a36c4675b668f8ff664f8f04c5a07d988da9791d21d2fb03b36bda"} Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.504002 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp" event={"ID":"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd","Type":"ContainerStarted","Data":"654a2f0ffaea929a4da38d3de091aedcc2e4c9229292e57eaf65f7d92d90ff4e"} Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.510887 5125 generic.go:358] "Generic (PLEG): container finished" podID="741bdbe2-0e2d-4d35-bd98-51e84ae1a831" containerID="949293bbe9e06e4da883ba1c25b9ab4dcadec3ebc69cde17ca35499c9930efd7" exitCode=0 Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.511025 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" event={"ID":"741bdbe2-0e2d-4d35-bd98-51e84ae1a831","Type":"ContainerDied","Data":"949293bbe9e06e4da883ba1c25b9ab4dcadec3ebc69cde17ca35499c9930efd7"} Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.511673 5125 scope.go:117] "RemoveContainer" containerID="949293bbe9e06e4da883ba1c25b9ab4dcadec3ebc69cde17ca35499c9930efd7" Dec 08 19:43:52 crc kubenswrapper[5125]: E1208 19:43:52.511997 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk_service-telemetry(741bdbe2-0e2d-4d35-bd98-51e84ae1a831)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" podUID="741bdbe2-0e2d-4d35-bd98-51e84ae1a831" Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.536430 5125 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" event={"ID":"bada9555-4b4c-45a2-8479-97c12deb8e88","Type":"ContainerStarted","Data":"252c19d56adeb6e957fbe08e6941a97046adac2fe5a2cf5ac1514e58d6881dbc"} Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.554514 5125 generic.go:358] "Generic (PLEG): container finished" podID="2deaaeb6-cb08-4321-89e5-b16ad67380a8" containerID="e8125468bd5c9ff72d32ee4e4d2b41e399306d84ff2ea5d78a71b401a975c6b3" exitCode=0 Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.554802 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" event={"ID":"2deaaeb6-cb08-4321-89e5-b16ad67380a8","Type":"ContainerDied","Data":"e8125468bd5c9ff72d32ee4e4d2b41e399306d84ff2ea5d78a71b401a975c6b3"} Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.577916 5125 scope.go:117] "RemoveContainer" containerID="e8125468bd5c9ff72d32ee4e4d2b41e399306d84ff2ea5d78a71b401a975c6b3" Dec 08 19:43:52 crc kubenswrapper[5125]: E1208 19:43:52.578496 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb_service-telemetry(2deaaeb6-cb08-4321-89e5-b16ad67380a8)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" podUID="2deaaeb6-cb08-4321-89e5-b16ad67380a8" Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.657085 5125 scope.go:117] "RemoveContainer" containerID="4dde27a169ce21d36dede0563786ff2b6ecbc94688248dbba7298f2b05dc7959" Dec 08 19:43:52 crc kubenswrapper[5125]: I1208 19:43:52.770482 5125 scope.go:117] "RemoveContainer" containerID="47583a655544e064416e05807920b13cc83763b778243945e8b5f060198d0c30" Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.564661 5125 scope.go:117] "RemoveContainer" 
containerID="e8125468bd5c9ff72d32ee4e4d2b41e399306d84ff2ea5d78a71b401a975c6b3" Dec 08 19:43:53 crc kubenswrapper[5125]: E1208 19:43:53.565537 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb_service-telemetry(2deaaeb6-cb08-4321-89e5-b16ad67380a8)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" podUID="2deaaeb6-cb08-4321-89e5-b16ad67380a8" Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.569305 5125 scope.go:117] "RemoveContainer" containerID="6bdc10fec07acc719d7ead2586a52051ddf7420b85226d2ff84fb52463c47285" Dec 08 19:43:53 crc kubenswrapper[5125]: E1208 19:43:53.569493 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-zrf9r_service-telemetry(6d5356cb-6c8c-44ab-aab8-435362446754)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" podUID="6d5356cb-6c8c-44ab-aab8-435362446754" Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.571848 5125 generic.go:358] "Generic (PLEG): container finished" podID="077469e1-469a-4837-b27e-a39eb253d98b" containerID="41122cfad71117885d436e6d3d2dd91958e9a700b5cb18cb209dc8b026720008" exitCode=0 Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.571963 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r" event={"ID":"077469e1-469a-4837-b27e-a39eb253d98b","Type":"ContainerDied","Data":"41122cfad71117885d436e6d3d2dd91958e9a700b5cb18cb209dc8b026720008"} Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.571995 5125 scope.go:117] "RemoveContainer" 
containerID="75dcacadddcc0038085dfe86a91af920fc98cbaa4f9d56d703f8604d2fe31747" Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.572296 5125 scope.go:117] "RemoveContainer" containerID="41122cfad71117885d436e6d3d2dd91958e9a700b5cb18cb209dc8b026720008" Dec 08 19:43:53 crc kubenswrapper[5125]: E1208 19:43:53.572474 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r_service-telemetry(077469e1-469a-4837-b27e-a39eb253d98b)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r" podUID="077469e1-469a-4837-b27e-a39eb253d98b" Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.583330 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grpj" event={"ID":"c4f5a7e7-22ed-47d2-bfea-b73f7df12065","Type":"ContainerStarted","Data":"947ec4492d6a163781d9b8bf10cf79072fc89962accce1ba3cc4513260fb7c46"} Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.591940 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-24f6m" podStartSLOduration=4.591919232 podStartE2EDuration="4.591919232s" podCreationTimestamp="2025-12-08 19:43:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:43:52.790258964 +0000 UTC m=+889.560749238" watchObservedRunningTime="2025-12-08 19:43:53.591919232 +0000 UTC m=+890.362409506" Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.597087 5125 generic.go:358] "Generic (PLEG): container finished" podID="cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd" containerID="654a2f0ffaea929a4da38d3de091aedcc2e4c9229292e57eaf65f7d92d90ff4e" exitCode=0 Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.597241 5125 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp" event={"ID":"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd","Type":"ContainerDied","Data":"654a2f0ffaea929a4da38d3de091aedcc2e4c9229292e57eaf65f7d92d90ff4e"} Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.597764 5125 scope.go:117] "RemoveContainer" containerID="654a2f0ffaea929a4da38d3de091aedcc2e4c9229292e57eaf65f7d92d90ff4e" Dec 08 19:43:53 crc kubenswrapper[5125]: E1208 19:43:53.598089 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp_service-telemetry(cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp" podUID="cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd" Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.603396 5125 scope.go:117] "RemoveContainer" containerID="949293bbe9e06e4da883ba1c25b9ab4dcadec3ebc69cde17ca35499c9930efd7" Dec 08 19:43:53 crc kubenswrapper[5125]: E1208 19:43:53.603645 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk_service-telemetry(741bdbe2-0e2d-4d35-bd98-51e84ae1a831)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" podUID="741bdbe2-0e2d-4d35-bd98-51e84ae1a831" Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.618323 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9grpj" podStartSLOduration=5.071550354 podStartE2EDuration="5.618307383s" podCreationTimestamp="2025-12-08 19:43:48 +0000 UTC" firstStartedPulling="2025-12-08 19:43:50.410843375 +0000 UTC m=+887.181333649" 
lastFinishedPulling="2025-12-08 19:43:50.957600404 +0000 UTC m=+887.728090678" observedRunningTime="2025-12-08 19:43:53.615565538 +0000 UTC m=+890.386055832" watchObservedRunningTime="2025-12-08 19:43:53.618307383 +0000 UTC m=+890.388797657" Dec 08 19:43:53 crc kubenswrapper[5125]: I1208 19:43:53.626978 5125 scope.go:117] "RemoveContainer" containerID="f60fae619be16c380d8ffeaf1c325edc8862bdfd30581bfb47899fa72a5f1af4" Dec 08 19:43:54 crc kubenswrapper[5125]: I1208 19:43:54.612964 5125 scope.go:117] "RemoveContainer" containerID="41122cfad71117885d436e6d3d2dd91958e9a700b5cb18cb209dc8b026720008" Dec 08 19:43:54 crc kubenswrapper[5125]: E1208 19:43:54.613259 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r_service-telemetry(077469e1-469a-4837-b27e-a39eb253d98b)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r" podUID="077469e1-469a-4837-b27e-a39eb253d98b" Dec 08 19:43:54 crc kubenswrapper[5125]: I1208 19:43:54.616569 5125 scope.go:117] "RemoveContainer" containerID="654a2f0ffaea929a4da38d3de091aedcc2e4c9229292e57eaf65f7d92d90ff4e" Dec 08 19:43:54 crc kubenswrapper[5125]: E1208 19:43:54.616923 5125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp_service-telemetry(cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp" podUID="cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd" Dec 08 19:43:55 crc kubenswrapper[5125]: I1208 19:43:55.703766 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rvpx4" Dec 08 19:43:55 crc kubenswrapper[5125]: I1208 
19:43:55.703982 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-rvpx4" Dec 08 19:43:55 crc kubenswrapper[5125]: I1208 19:43:55.746828 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rvpx4" Dec 08 19:43:56 crc kubenswrapper[5125]: I1208 19:43:56.686385 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rvpx4" Dec 08 19:43:57 crc kubenswrapper[5125]: I1208 19:43:57.880093 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvpx4"] Dec 08 19:43:58 crc kubenswrapper[5125]: I1208 19:43:58.663576 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rvpx4" podUID="aaa825d2-d84c-4a24-8e68-b718290d504d" containerName="registry-server" containerID="cri-o://7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba" gracePeriod=2 Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.060558 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvpx4" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.096380 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhhnc\" (UniqueName: \"kubernetes.io/projected/aaa825d2-d84c-4a24-8e68-b718290d504d-kube-api-access-bhhnc\") pod \"aaa825d2-d84c-4a24-8e68-b718290d504d\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.096458 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-utilities\") pod \"aaa825d2-d84c-4a24-8e68-b718290d504d\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.096651 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-catalog-content\") pod \"aaa825d2-d84c-4a24-8e68-b718290d504d\" (UID: \"aaa825d2-d84c-4a24-8e68-b718290d504d\") " Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.098734 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-utilities" (OuterVolumeSpecName: "utilities") pod "aaa825d2-d84c-4a24-8e68-b718290d504d" (UID: "aaa825d2-d84c-4a24-8e68-b718290d504d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.102405 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa825d2-d84c-4a24-8e68-b718290d504d-kube-api-access-bhhnc" (OuterVolumeSpecName: "kube-api-access-bhhnc") pod "aaa825d2-d84c-4a24-8e68-b718290d504d" (UID: "aaa825d2-d84c-4a24-8e68-b718290d504d"). InnerVolumeSpecName "kube-api-access-bhhnc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.142760 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaa825d2-d84c-4a24-8e68-b718290d504d" (UID: "aaa825d2-d84c-4a24-8e68-b718290d504d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.198285 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.198315 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bhhnc\" (UniqueName: \"kubernetes.io/projected/aaa825d2-d84c-4a24-8e68-b718290d504d-kube-api-access-bhhnc\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.198325 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaa825d2-d84c-4a24-8e68-b718290d504d-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.458487 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.458535 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.508701 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.673163 5125 generic.go:358] "Generic (PLEG): 
container finished" podID="aaa825d2-d84c-4a24-8e68-b718290d504d" containerID="7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba" exitCode=0 Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.673205 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvpx4" event={"ID":"aaa825d2-d84c-4a24-8e68-b718290d504d","Type":"ContainerDied","Data":"7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba"} Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.673671 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvpx4" event={"ID":"aaa825d2-d84c-4a24-8e68-b718290d504d","Type":"ContainerDied","Data":"72553030036d5bbae390ec1c0d197972041044f712744b670a34f5318c0e98bc"} Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.673240 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvpx4" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.673697 5125 scope.go:117] "RemoveContainer" containerID="7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.713986 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rvpx4"] Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.714122 5125 scope.go:117] "RemoveContainer" containerID="4a913e059e8a1fb50cda2cd1e424a5d810eb500d709227a06ce70ead66b91069" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.721046 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rvpx4"] Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.721216 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.745459 5125 scope.go:117] "RemoveContainer" 
containerID="3f603736700d38652eb48b4dbe0bd17df06ecb7af763ce1abd3e709d01dde091" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.766692 5125 scope.go:117] "RemoveContainer" containerID="7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba" Dec 08 19:43:59 crc kubenswrapper[5125]: E1208 19:43:59.767172 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba\": container with ID starting with 7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba not found: ID does not exist" containerID="7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.767213 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba"} err="failed to get container status \"7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba\": rpc error: code = NotFound desc = could not find container \"7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba\": container with ID starting with 7b6bc9de09e6e6adfc09b927c896f7778071b9aab1f254d261f1930a4b2d06ba not found: ID does not exist" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.767236 5125 scope.go:117] "RemoveContainer" containerID="4a913e059e8a1fb50cda2cd1e424a5d810eb500d709227a06ce70ead66b91069" Dec 08 19:43:59 crc kubenswrapper[5125]: E1208 19:43:59.767766 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a913e059e8a1fb50cda2cd1e424a5d810eb500d709227a06ce70ead66b91069\": container with ID starting with 4a913e059e8a1fb50cda2cd1e424a5d810eb500d709227a06ce70ead66b91069 not found: ID does not exist" containerID="4a913e059e8a1fb50cda2cd1e424a5d810eb500d709227a06ce70ead66b91069" Dec 08 19:43:59 crc 
kubenswrapper[5125]: I1208 19:43:59.767865 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a913e059e8a1fb50cda2cd1e424a5d810eb500d709227a06ce70ead66b91069"} err="failed to get container status \"4a913e059e8a1fb50cda2cd1e424a5d810eb500d709227a06ce70ead66b91069\": rpc error: code = NotFound desc = could not find container \"4a913e059e8a1fb50cda2cd1e424a5d810eb500d709227a06ce70ead66b91069\": container with ID starting with 4a913e059e8a1fb50cda2cd1e424a5d810eb500d709227a06ce70ead66b91069 not found: ID does not exist" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.767946 5125 scope.go:117] "RemoveContainer" containerID="3f603736700d38652eb48b4dbe0bd17df06ecb7af763ce1abd3e709d01dde091" Dec 08 19:43:59 crc kubenswrapper[5125]: E1208 19:43:59.768292 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f603736700d38652eb48b4dbe0bd17df06ecb7af763ce1abd3e709d01dde091\": container with ID starting with 3f603736700d38652eb48b4dbe0bd17df06ecb7af763ce1abd3e709d01dde091 not found: ID does not exist" containerID="3f603736700d38652eb48b4dbe0bd17df06ecb7af763ce1abd3e709d01dde091" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.768321 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f603736700d38652eb48b4dbe0bd17df06ecb7af763ce1abd3e709d01dde091"} err="failed to get container status \"3f603736700d38652eb48b4dbe0bd17df06ecb7af763ce1abd3e709d01dde091\": rpc error: code = NotFound desc = could not find container \"3f603736700d38652eb48b4dbe0bd17df06ecb7af763ce1abd3e709d01dde091\": container with ID starting with 3f603736700d38652eb48b4dbe0bd17df06ecb7af763ce1abd3e709d01dde091 not found: ID does not exist" Dec 08 19:43:59 crc kubenswrapper[5125]: I1208 19:43:59.778085 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa825d2-d84c-4a24-8e68-b718290d504d" 
path="/var/lib/kubelet/pods/aaa825d2-d84c-4a24-8e68-b718290d504d/volumes" Dec 08 19:44:01 crc kubenswrapper[5125]: I1208 19:44:01.881277 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9grpj"] Dec 08 19:44:01 crc kubenswrapper[5125]: I1208 19:44:01.881677 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9grpj" podUID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" containerName="registry-server" containerID="cri-o://947ec4492d6a163781d9b8bf10cf79072fc89962accce1ba3cc4513260fb7c46" gracePeriod=2 Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.728972 5125 generic.go:358] "Generic (PLEG): container finished" podID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" containerID="947ec4492d6a163781d9b8bf10cf79072fc89962accce1ba3cc4513260fb7c46" exitCode=0 Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.729337 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grpj" event={"ID":"c4f5a7e7-22ed-47d2-bfea-b73f7df12065","Type":"ContainerDied","Data":"947ec4492d6a163781d9b8bf10cf79072fc89962accce1ba3cc4513260fb7c46"} Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.767088 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.851065 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-utilities\") pod \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.851216 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h92kd\" (UniqueName: \"kubernetes.io/projected/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-kube-api-access-h92kd\") pod \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.851308 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-catalog-content\") pod \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\" (UID: \"c4f5a7e7-22ed-47d2-bfea-b73f7df12065\") " Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.852461 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-utilities" (OuterVolumeSpecName: "utilities") pod "c4f5a7e7-22ed-47d2-bfea-b73f7df12065" (UID: "c4f5a7e7-22ed-47d2-bfea-b73f7df12065"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.858978 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-kube-api-access-h92kd" (OuterVolumeSpecName: "kube-api-access-h92kd") pod "c4f5a7e7-22ed-47d2-bfea-b73f7df12065" (UID: "c4f5a7e7-22ed-47d2-bfea-b73f7df12065"). InnerVolumeSpecName "kube-api-access-h92kd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.908997 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4f5a7e7-22ed-47d2-bfea-b73f7df12065" (UID: "c4f5a7e7-22ed-47d2-bfea-b73f7df12065"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.952372 5125 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.952407 5125 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:02 crc kubenswrapper[5125]: I1208 19:44:02.952417 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h92kd\" (UniqueName: \"kubernetes.io/projected/c4f5a7e7-22ed-47d2-bfea-b73f7df12065-kube-api-access-h92kd\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:03 crc kubenswrapper[5125]: I1208 19:44:03.744801 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9grpj" Dec 08 19:44:03 crc kubenswrapper[5125]: I1208 19:44:03.744814 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9grpj" event={"ID":"c4f5a7e7-22ed-47d2-bfea-b73f7df12065","Type":"ContainerDied","Data":"dd8868566379727404c6c052a4a94f95c6b2925dbe7b27b9bf7e61770788c612"} Dec 08 19:44:03 crc kubenswrapper[5125]: I1208 19:44:03.744881 5125 scope.go:117] "RemoveContainer" containerID="947ec4492d6a163781d9b8bf10cf79072fc89962accce1ba3cc4513260fb7c46" Dec 08 19:44:03 crc kubenswrapper[5125]: I1208 19:44:03.762988 5125 scope.go:117] "RemoveContainer" containerID="a088724b26a36c4675b668f8ff664f8f04c5a07d988da9791d21d2fb03b36bda" Dec 08 19:44:03 crc kubenswrapper[5125]: I1208 19:44:03.793747 5125 scope.go:117] "RemoveContainer" containerID="01646ef06562a901961204c6f04f78b1a4fb50e07c8a502265985fabe3f45870" Dec 08 19:44:03 crc kubenswrapper[5125]: I1208 19:44:03.807911 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9grpj"] Dec 08 19:44:03 crc kubenswrapper[5125]: I1208 19:44:03.817300 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9grpj"] Dec 08 19:44:04 crc kubenswrapper[5125]: I1208 19:44:04.087339 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p7g8_b938d768-ccce-45a6-a982-3f5d6f1a7d98/kube-multus/0.log" Dec 08 19:44:04 crc kubenswrapper[5125]: I1208 19:44:04.094903 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p7g8_b938d768-ccce-45a6-a982-3f5d6f1a7d98/kube-multus/0.log" Dec 08 19:44:04 crc kubenswrapper[5125]: I1208 19:44:04.102222 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:44:04 crc 
kubenswrapper[5125]: I1208 19:44:04.107358 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:44:04 crc kubenswrapper[5125]: I1208 19:44:04.767253 5125 scope.go:117] "RemoveContainer" containerID="949293bbe9e06e4da883ba1c25b9ab4dcadec3ebc69cde17ca35499c9930efd7" Dec 08 19:44:05 crc kubenswrapper[5125]: I1208 19:44:05.758679 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-89vzk" event={"ID":"741bdbe2-0e2d-4d35-bd98-51e84ae1a831","Type":"ContainerStarted","Data":"a293ad4312b05b94b39ecd17e64e94b5f0f761ae2fb8d1c8dc5664b13c124d53"} Dec 08 19:44:05 crc kubenswrapper[5125]: I1208 19:44:05.769570 5125 scope.go:117] "RemoveContainer" containerID="6bdc10fec07acc719d7ead2586a52051ddf7420b85226d2ff84fb52463c47285" Dec 08 19:44:05 crc kubenswrapper[5125]: I1208 19:44:05.769715 5125 scope.go:117] "RemoveContainer" containerID="e8125468bd5c9ff72d32ee4e4d2b41e399306d84ff2ea5d78a71b401a975c6b3" Dec 08 19:44:05 crc kubenswrapper[5125]: I1208 19:44:05.770104 5125 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:44:05 crc kubenswrapper[5125]: I1208 19:44:05.779157 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" path="/var/lib/kubelet/pods/c4f5a7e7-22ed-47d2-bfea-b73f7df12065/volumes" Dec 08 19:44:05 crc kubenswrapper[5125]: E1208 19:44:05.864147 5125 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice/crio-dd8868566379727404c6c052a4a94f95c6b2925dbe7b27b9bf7e61770788c612\": RecentStats: unable to find data in memory cache]" Dec 08 19:44:06 crc kubenswrapper[5125]: I1208 19:44:06.768770 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-zrf9r" event={"ID":"6d5356cb-6c8c-44ab-aab8-435362446754","Type":"ContainerStarted","Data":"1b311656984d124630703e9950ff70af18b77f7aefe7b8a25ee66a22fdd4f9e9"} Dec 08 19:44:06 crc kubenswrapper[5125]: I1208 19:44:06.771313 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-sdxpb" event={"ID":"2deaaeb6-cb08-4321-89e5-b16ad67380a8","Type":"ContainerStarted","Data":"0a1f2fb72fbcdcfd9bacd77fe8e264223cb6a780bb3c7111743cf86068e17006"} Dec 08 19:44:08 crc kubenswrapper[5125]: I1208 19:44:08.767105 5125 scope.go:117] "RemoveContainer" containerID="654a2f0ffaea929a4da38d3de091aedcc2e4c9229292e57eaf65f7d92d90ff4e" Dec 08 19:44:09 crc kubenswrapper[5125]: I1208 19:44:09.772852 5125 scope.go:117] "RemoveContainer" containerID="41122cfad71117885d436e6d3d2dd91958e9a700b5cb18cb209dc8b026720008" Dec 08 19:44:09 crc kubenswrapper[5125]: I1208 19:44:09.793691 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6ffdf7fb6b-hxssp" event={"ID":"cda9cd0f-6a7b-4b31-b58e-3c6af33f38dd","Type":"ContainerStarted","Data":"8de9e728d10b12303dfcdf4c0bc3a587fdcc2c51565b79b01df0172cf2f69e04"} Dec 08 19:44:10 crc kubenswrapper[5125]: I1208 19:44:10.804862 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7777459dd9-phb9r" event={"ID":"077469e1-469a-4837-b27e-a39eb253d98b","Type":"ContainerStarted","Data":"5927a71a0aeef76a7cac75a6b6d4d96f0a1be229c6a7395bd615229ca728be02"} Dec 08 
19:44:16 crc kubenswrapper[5125]: E1208 19:44:16.038786 5125 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice/crio-dd8868566379727404c6c052a4a94f95c6b2925dbe7b27b9bf7e61770788c612\": RecentStats: unable to find data in memory cache]" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.759322 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.760771 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" containerName="extract-utilities" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.760803 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" containerName="extract-utilities" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.760813 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" containerName="extract-content" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.760821 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" containerName="extract-content" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.760833 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aaa825d2-d84c-4a24-8e68-b718290d504d" containerName="extract-content" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.760841 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa825d2-d84c-4a24-8e68-b718290d504d" containerName="extract-content" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 
19:44:20.760859 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aaa825d2-d84c-4a24-8e68-b718290d504d" containerName="registry-server" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.760866 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa825d2-d84c-4a24-8e68-b718290d504d" containerName="registry-server" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.760911 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aaa825d2-d84c-4a24-8e68-b718290d504d" containerName="extract-utilities" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.760919 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa825d2-d84c-4a24-8e68-b718290d504d" containerName="extract-utilities" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.760943 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" containerName="registry-server" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.760950 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" containerName="registry-server" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.761090 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="aaa825d2-d84c-4a24-8e68-b718290d504d" containerName="registry-server" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.761110 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="c4f5a7e7-22ed-47d2-bfea-b73f7df12065" containerName="registry-server" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.775058 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.775216 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.776873 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.780798 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.811284 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hcm9\" (UniqueName: \"kubernetes.io/projected/35f1179c-356e-46e7-9afc-474ba5233dcc-kube-api-access-9hcm9\") pod \"qdr-test\" (UID: \"35f1179c-356e-46e7-9afc-474ba5233dcc\") " pod="service-telemetry/qdr-test" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.811446 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/35f1179c-356e-46e7-9afc-474ba5233dcc-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"35f1179c-356e-46e7-9afc-474ba5233dcc\") " pod="service-telemetry/qdr-test" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.811519 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/35f1179c-356e-46e7-9afc-474ba5233dcc-qdr-test-config\") pod \"qdr-test\" (UID: \"35f1179c-356e-46e7-9afc-474ba5233dcc\") " pod="service-telemetry/qdr-test" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.912594 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9hcm9\" (UniqueName: \"kubernetes.io/projected/35f1179c-356e-46e7-9afc-474ba5233dcc-kube-api-access-9hcm9\") pod \"qdr-test\" (UID: \"35f1179c-356e-46e7-9afc-474ba5233dcc\") " 
pod="service-telemetry/qdr-test" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.912725 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/35f1179c-356e-46e7-9afc-474ba5233dcc-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"35f1179c-356e-46e7-9afc-474ba5233dcc\") " pod="service-telemetry/qdr-test" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.912770 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/35f1179c-356e-46e7-9afc-474ba5233dcc-qdr-test-config\") pod \"qdr-test\" (UID: \"35f1179c-356e-46e7-9afc-474ba5233dcc\") " pod="service-telemetry/qdr-test" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.913624 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/35f1179c-356e-46e7-9afc-474ba5233dcc-qdr-test-config\") pod \"qdr-test\" (UID: \"35f1179c-356e-46e7-9afc-474ba5233dcc\") " pod="service-telemetry/qdr-test" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.919530 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/35f1179c-356e-46e7-9afc-474ba5233dcc-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"35f1179c-356e-46e7-9afc-474ba5233dcc\") " pod="service-telemetry/qdr-test" Dec 08 19:44:20 crc kubenswrapper[5125]: I1208 19:44:20.932949 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hcm9\" (UniqueName: \"kubernetes.io/projected/35f1179c-356e-46e7-9afc-474ba5233dcc-kube-api-access-9hcm9\") pod \"qdr-test\" (UID: \"35f1179c-356e-46e7-9afc-474ba5233dcc\") " pod="service-telemetry/qdr-test" Dec 08 19:44:21 crc kubenswrapper[5125]: I1208 19:44:21.094347 5125 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 08 19:44:21 crc kubenswrapper[5125]: I1208 19:44:21.288750 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 08 19:44:21 crc kubenswrapper[5125]: I1208 19:44:21.908392 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"35f1179c-356e-46e7-9afc-474ba5233dcc","Type":"ContainerStarted","Data":"b3f06a86f23e1eda5b89b9f9900caa2fa32e16f865b1690924018fed34cdad03"} Dec 08 19:44:26 crc kubenswrapper[5125]: E1208 19:44:26.200779 5125 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice/crio-dd8868566379727404c6c052a4a94f95c6b2925dbe7b27b9bf7e61770788c612\": RecentStats: unable to find data in memory cache]" Dec 08 19:44:27 crc kubenswrapper[5125]: I1208 19:44:27.959141 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"35f1179c-356e-46e7-9afc-474ba5233dcc","Type":"ContainerStarted","Data":"4e10c6dacd942654ebf4a585ce6a7992bc432787a2144f7f88249a8e196d9819"} Dec 08 19:44:27 crc kubenswrapper[5125]: I1208 19:44:27.981074 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.718287456 podStartE2EDuration="7.981058468s" podCreationTimestamp="2025-12-08 19:44:20 +0000 UTC" firstStartedPulling="2025-12-08 19:44:21.297492677 +0000 UTC m=+918.067982951" lastFinishedPulling="2025-12-08 19:44:27.560263669 +0000 UTC m=+924.330753963" observedRunningTime="2025-12-08 19:44:27.977961833 +0000 UTC m=+924.748452117" watchObservedRunningTime="2025-12-08 19:44:27.981058468 +0000 UTC 
m=+924.751548742" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.305373 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hrw9w"] Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.442549 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hrw9w"] Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.442750 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.445418 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.448693 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.448783 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.449076 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.449299 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.449668 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.538167 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.538239 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.538289 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-config\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.538336 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-sensubility-config\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.538389 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bfp8\" (UniqueName: \"kubernetes.io/projected/f4834735-4658-450c-b286-08fd815ceb02-kube-api-access-6bfp8\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.538488 5125 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.538576 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-healthcheck-log\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.639573 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.639684 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.639722 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-config\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " 
pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.639750 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-sensubility-config\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.639777 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6bfp8\" (UniqueName: \"kubernetes.io/projected/f4834735-4658-450c-b286-08fd815ceb02-kube-api-access-6bfp8\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.639857 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.639895 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-healthcheck-log\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.640975 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: 
\"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.641323 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-healthcheck-log\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.641463 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.641888 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-config\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.641968 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.642399 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-sensubility-config\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: 
\"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.667100 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bfp8\" (UniqueName: \"kubernetes.io/projected/f4834735-4658-450c-b286-08fd815ceb02-kube-api-access-6bfp8\") pod \"stf-smoketest-smoke1-hrw9w\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") " pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.763120 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hrw9w" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.776090 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.786506 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.790886 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.843623 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jfj9\" (UniqueName: \"kubernetes.io/projected/216f472a-6782-4ffd-91bb-580d68fbe86c-kube-api-access-6jfj9\") pod \"curl\" (UID: \"216f472a-6782-4ffd-91bb-580d68fbe86c\") " pod="service-telemetry/curl" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.944799 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6jfj9\" (UniqueName: \"kubernetes.io/projected/216f472a-6782-4ffd-91bb-580d68fbe86c-kube-api-access-6jfj9\") pod \"curl\" (UID: \"216f472a-6782-4ffd-91bb-580d68fbe86c\") " pod="service-telemetry/curl" Dec 08 19:44:28 crc kubenswrapper[5125]: I1208 19:44:28.964290 5125 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6jfj9\" (UniqueName: \"kubernetes.io/projected/216f472a-6782-4ffd-91bb-580d68fbe86c-kube-api-access-6jfj9\") pod \"curl\" (UID: \"216f472a-6782-4ffd-91bb-580d68fbe86c\") " pod="service-telemetry/curl" Dec 08 19:44:29 crc kubenswrapper[5125]: I1208 19:44:29.155124 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 08 19:44:29 crc kubenswrapper[5125]: I1208 19:44:29.199957 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hrw9w"] Dec 08 19:44:29 crc kubenswrapper[5125]: W1208 19:44:29.212992 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4834735_4658_450c_b286_08fd815ceb02.slice/crio-1edd3a8bd0c739242c3a6b97c0915e542113934fa195e2877f239e596ed3fdb1 WatchSource:0}: Error finding container 1edd3a8bd0c739242c3a6b97c0915e542113934fa195e2877f239e596ed3fdb1: Status 404 returned error can't find the container with id 1edd3a8bd0c739242c3a6b97c0915e542113934fa195e2877f239e596ed3fdb1 Dec 08 19:44:29 crc kubenswrapper[5125]: I1208 19:44:29.336818 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 08 19:44:29 crc kubenswrapper[5125]: W1208 19:44:29.344385 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod216f472a_6782_4ffd_91bb_580d68fbe86c.slice/crio-cbb4096267874eb8e748f5fc73869559bf3c9355191e6ee43f14227efcdffa81 WatchSource:0}: Error finding container cbb4096267874eb8e748f5fc73869559bf3c9355191e6ee43f14227efcdffa81: Status 404 returned error can't find the container with id cbb4096267874eb8e748f5fc73869559bf3c9355191e6ee43f14227efcdffa81 Dec 08 19:44:29 crc kubenswrapper[5125]: I1208 19:44:29.990562 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" 
event={"ID":"216f472a-6782-4ffd-91bb-580d68fbe86c","Type":"ContainerStarted","Data":"cbb4096267874eb8e748f5fc73869559bf3c9355191e6ee43f14227efcdffa81"} Dec 08 19:44:29 crc kubenswrapper[5125]: I1208 19:44:29.994283 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hrw9w" event={"ID":"f4834735-4658-450c-b286-08fd815ceb02","Type":"ContainerStarted","Data":"1edd3a8bd0c739242c3a6b97c0915e542113934fa195e2877f239e596ed3fdb1"} Dec 08 19:44:32 crc kubenswrapper[5125]: I1208 19:44:32.013002 5125 generic.go:358] "Generic (PLEG): container finished" podID="216f472a-6782-4ffd-91bb-580d68fbe86c" containerID="0b59a14a6ae834bac2c6dd61f68dcf275331cbcc68871264494f77d06236bdf2" exitCode=0 Dec 08 19:44:32 crc kubenswrapper[5125]: I1208 19:44:32.013094 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"216f472a-6782-4ffd-91bb-580d68fbe86c","Type":"ContainerDied","Data":"0b59a14a6ae834bac2c6dd61f68dcf275331cbcc68871264494f77d06236bdf2"} Dec 08 19:44:36 crc kubenswrapper[5125]: E1208 19:44:36.347415 5125 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice/crio-dd8868566379727404c6c052a4a94f95c6b2925dbe7b27b9bf7e61770788c612\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice\": RecentStats: unable to find data in memory cache]" Dec 08 19:44:36 crc kubenswrapper[5125]: I1208 19:44:36.433831 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 08 19:44:36 crc kubenswrapper[5125]: I1208 19:44:36.559785 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jfj9\" (UniqueName: \"kubernetes.io/projected/216f472a-6782-4ffd-91bb-580d68fbe86c-kube-api-access-6jfj9\") pod \"216f472a-6782-4ffd-91bb-580d68fbe86c\" (UID: \"216f472a-6782-4ffd-91bb-580d68fbe86c\") " Dec 08 19:44:36 crc kubenswrapper[5125]: I1208 19:44:36.569056 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216f472a-6782-4ffd-91bb-580d68fbe86c-kube-api-access-6jfj9" (OuterVolumeSpecName: "kube-api-access-6jfj9") pod "216f472a-6782-4ffd-91bb-580d68fbe86c" (UID: "216f472a-6782-4ffd-91bb-580d68fbe86c"). InnerVolumeSpecName "kube-api-access-6jfj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:44:36 crc kubenswrapper[5125]: I1208 19:44:36.579122 5125 ???:1] "http: TLS handshake error from 192.168.126.11:54004: no serving certificate available for the kubelet" Dec 08 19:44:36 crc kubenswrapper[5125]: I1208 19:44:36.662637 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6jfj9\" (UniqueName: \"kubernetes.io/projected/216f472a-6782-4ffd-91bb-580d68fbe86c-kube-api-access-6jfj9\") on node \"crc\" DevicePath \"\"" Dec 08 19:44:36 crc kubenswrapper[5125]: I1208 19:44:36.881207 5125 ???:1] "http: TLS handshake error from 192.168.126.11:54008: no serving certificate available for the kubelet" Dec 08 19:44:37 crc kubenswrapper[5125]: I1208 19:44:37.052980 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"216f472a-6782-4ffd-91bb-580d68fbe86c","Type":"ContainerDied","Data":"cbb4096267874eb8e748f5fc73869559bf3c9355191e6ee43f14227efcdffa81"} Dec 08 19:44:37 crc kubenswrapper[5125]: I1208 19:44:37.053088 5125 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="cbb4096267874eb8e748f5fc73869559bf3c9355191e6ee43f14227efcdffa81" Dec 08 19:44:37 crc kubenswrapper[5125]: I1208 19:44:37.053149 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 08 19:44:38 crc kubenswrapper[5125]: I1208 19:44:38.060745 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hrw9w" event={"ID":"f4834735-4658-450c-b286-08fd815ceb02","Type":"ContainerStarted","Data":"ab4c0ccc9cc321aecab918ac1c1c7d6364f127bf7e06bd4ff1196e7ae77c2646"} Dec 08 19:44:46 crc kubenswrapper[5125]: E1208 19:44:46.500935 5125 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice/crio-dd8868566379727404c6c052a4a94f95c6b2925dbe7b27b9bf7e61770788c612\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice\": RecentStats: unable to find data in memory cache]" Dec 08 19:44:50 crc kubenswrapper[5125]: I1208 19:44:50.142251 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hrw9w" event={"ID":"f4834735-4658-450c-b286-08fd815ceb02","Type":"ContainerStarted","Data":"b4d3e72012c704eff9c75c52a8e41716208b9d9037d2aecc435181b109a8213c"} Dec 08 19:44:50 crc kubenswrapper[5125]: I1208 19:44:50.169014 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-hrw9w" podStartSLOduration=1.579293427 podStartE2EDuration="22.168998178s" podCreationTimestamp="2025-12-08 19:44:28 +0000 UTC" firstStartedPulling="2025-12-08 19:44:29.215982927 +0000 UTC m=+925.986473201" lastFinishedPulling="2025-12-08 19:44:49.805687678 +0000 UTC m=+946.576177952" observedRunningTime="2025-12-08 19:44:50.164441403 +0000 UTC 
m=+946.934931697" watchObservedRunningTime="2025-12-08 19:44:50.168998178 +0000 UTC m=+946.939488452" Dec 08 19:44:51 crc kubenswrapper[5125]: I1208 19:44:51.102124 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:44:51 crc kubenswrapper[5125]: I1208 19:44:51.102205 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:44:56 crc kubenswrapper[5125]: E1208 19:44:56.693503 5125 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f5a7e7_22ed_47d2_bfea_b73f7df12065.slice/crio-dd8868566379727404c6c052a4a94f95c6b2925dbe7b27b9bf7e61770788c612\": RecentStats: unable to find data in memory cache]" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.155090 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj"] Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.156002 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="216f472a-6782-4ffd-91bb-580d68fbe86c" containerName="curl" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.156014 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="216f472a-6782-4ffd-91bb-580d68fbe86c" containerName="curl" 
Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.156141 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="216f472a-6782-4ffd-91bb-580d68fbe86c" containerName="curl" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.162732 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.164427 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.167229 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj"] Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.223534 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.347749 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvgzc\" (UniqueName: \"kubernetes.io/projected/a370250a-63fe-4722-8041-7c61ffc91ef7-kube-api-access-cvgzc\") pod \"collect-profiles-29420385-rcxqj\" (UID: \"a370250a-63fe-4722-8041-7c61ffc91ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.348373 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a370250a-63fe-4722-8041-7c61ffc91ef7-config-volume\") pod \"collect-profiles-29420385-rcxqj\" (UID: \"a370250a-63fe-4722-8041-7c61ffc91ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.348559 5125 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a370250a-63fe-4722-8041-7c61ffc91ef7-secret-volume\") pod \"collect-profiles-29420385-rcxqj\" (UID: \"a370250a-63fe-4722-8041-7c61ffc91ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.449584 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cvgzc\" (UniqueName: \"kubernetes.io/projected/a370250a-63fe-4722-8041-7c61ffc91ef7-kube-api-access-cvgzc\") pod \"collect-profiles-29420385-rcxqj\" (UID: \"a370250a-63fe-4722-8041-7c61ffc91ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.449668 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a370250a-63fe-4722-8041-7c61ffc91ef7-config-volume\") pod \"collect-profiles-29420385-rcxqj\" (UID: \"a370250a-63fe-4722-8041-7c61ffc91ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.449743 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a370250a-63fe-4722-8041-7c61ffc91ef7-secret-volume\") pod \"collect-profiles-29420385-rcxqj\" (UID: \"a370250a-63fe-4722-8041-7c61ffc91ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.450733 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a370250a-63fe-4722-8041-7c61ffc91ef7-config-volume\") pod \"collect-profiles-29420385-rcxqj\" (UID: \"a370250a-63fe-4722-8041-7c61ffc91ef7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.460281 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a370250a-63fe-4722-8041-7c61ffc91ef7-secret-volume\") pod \"collect-profiles-29420385-rcxqj\" (UID: \"a370250a-63fe-4722-8041-7c61ffc91ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.467872 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvgzc\" (UniqueName: \"kubernetes.io/projected/a370250a-63fe-4722-8041-7c61ffc91ef7-kube-api-access-cvgzc\") pod \"collect-profiles-29420385-rcxqj\" (UID: \"a370250a-63fe-4722-8041-7c61ffc91ef7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.553114 5125 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:00 crc kubenswrapper[5125]: I1208 19:45:00.985667 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj"] Dec 08 19:45:00 crc kubenswrapper[5125]: W1208 19:45:00.996764 5125 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda370250a_63fe_4722_8041_7c61ffc91ef7.slice/crio-956e5537a274a473b699a580bb0dc43871f3ba4b7698c9f56e2f40eeb851cd8f WatchSource:0}: Error finding container 956e5537a274a473b699a580bb0dc43871f3ba4b7698c9f56e2f40eeb851cd8f: Status 404 returned error can't find the container with id 956e5537a274a473b699a580bb0dc43871f3ba4b7698c9f56e2f40eeb851cd8f Dec 08 19:45:01 crc kubenswrapper[5125]: I1208 19:45:01.237079 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" event={"ID":"a370250a-63fe-4722-8041-7c61ffc91ef7","Type":"ContainerStarted","Data":"f7a311bfd910ff3073d5988d17be480673ddee09b14e638b4d82ca1c86a8e780"} Dec 08 19:45:01 crc kubenswrapper[5125]: I1208 19:45:01.237429 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" event={"ID":"a370250a-63fe-4722-8041-7c61ffc91ef7","Type":"ContainerStarted","Data":"956e5537a274a473b699a580bb0dc43871f3ba4b7698c9f56e2f40eeb851cd8f"} Dec 08 19:45:01 crc kubenswrapper[5125]: I1208 19:45:01.251810 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" podStartSLOduration=1.251792567 podStartE2EDuration="1.251792567s" podCreationTimestamp="2025-12-08 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 
19:45:01.25010226 +0000 UTC m=+958.020592554" watchObservedRunningTime="2025-12-08 19:45:01.251792567 +0000 UTC m=+958.022282841" Dec 08 19:45:02 crc kubenswrapper[5125]: I1208 19:45:02.247282 5125 generic.go:358] "Generic (PLEG): container finished" podID="a370250a-63fe-4722-8041-7c61ffc91ef7" containerID="f7a311bfd910ff3073d5988d17be480673ddee09b14e638b4d82ca1c86a8e780" exitCode=0 Dec 08 19:45:02 crc kubenswrapper[5125]: I1208 19:45:02.247421 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" event={"ID":"a370250a-63fe-4722-8041-7c61ffc91ef7","Type":"ContainerDied","Data":"f7a311bfd910ff3073d5988d17be480673ddee09b14e638b4d82ca1c86a8e780"} Dec 08 19:45:03 crc kubenswrapper[5125]: I1208 19:45:03.530176 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:03 crc kubenswrapper[5125]: I1208 19:45:03.594768 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvgzc\" (UniqueName: \"kubernetes.io/projected/a370250a-63fe-4722-8041-7c61ffc91ef7-kube-api-access-cvgzc\") pod \"a370250a-63fe-4722-8041-7c61ffc91ef7\" (UID: \"a370250a-63fe-4722-8041-7c61ffc91ef7\") " Dec 08 19:45:03 crc kubenswrapper[5125]: I1208 19:45:03.594866 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a370250a-63fe-4722-8041-7c61ffc91ef7-secret-volume\") pod \"a370250a-63fe-4722-8041-7c61ffc91ef7\" (UID: \"a370250a-63fe-4722-8041-7c61ffc91ef7\") " Dec 08 19:45:03 crc kubenswrapper[5125]: I1208 19:45:03.595091 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a370250a-63fe-4722-8041-7c61ffc91ef7-config-volume\") pod \"a370250a-63fe-4722-8041-7c61ffc91ef7\" (UID: 
\"a370250a-63fe-4722-8041-7c61ffc91ef7\") " Dec 08 19:45:03 crc kubenswrapper[5125]: I1208 19:45:03.595502 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a370250a-63fe-4722-8041-7c61ffc91ef7-config-volume" (OuterVolumeSpecName: "config-volume") pod "a370250a-63fe-4722-8041-7c61ffc91ef7" (UID: "a370250a-63fe-4722-8041-7c61ffc91ef7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:45:03 crc kubenswrapper[5125]: I1208 19:45:03.609490 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a370250a-63fe-4722-8041-7c61ffc91ef7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a370250a-63fe-4722-8041-7c61ffc91ef7" (UID: "a370250a-63fe-4722-8041-7c61ffc91ef7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:45:03 crc kubenswrapper[5125]: I1208 19:45:03.609542 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a370250a-63fe-4722-8041-7c61ffc91ef7-kube-api-access-cvgzc" (OuterVolumeSpecName: "kube-api-access-cvgzc") pod "a370250a-63fe-4722-8041-7c61ffc91ef7" (UID: "a370250a-63fe-4722-8041-7c61ffc91ef7"). InnerVolumeSpecName "kube-api-access-cvgzc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:45:03 crc kubenswrapper[5125]: I1208 19:45:03.696800 5125 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a370250a-63fe-4722-8041-7c61ffc91ef7-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:45:03 crc kubenswrapper[5125]: I1208 19:45:03.696851 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cvgzc\" (UniqueName: \"kubernetes.io/projected/a370250a-63fe-4722-8041-7c61ffc91ef7-kube-api-access-cvgzc\") on node \"crc\" DevicePath \"\"" Dec 08 19:45:03 crc kubenswrapper[5125]: I1208 19:45:03.696866 5125 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a370250a-63fe-4722-8041-7c61ffc91ef7-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:45:04 crc kubenswrapper[5125]: I1208 19:45:04.266952 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" event={"ID":"a370250a-63fe-4722-8041-7c61ffc91ef7","Type":"ContainerDied","Data":"956e5537a274a473b699a580bb0dc43871f3ba4b7698c9f56e2f40eeb851cd8f"} Dec 08 19:45:04 crc kubenswrapper[5125]: I1208 19:45:04.267002 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956e5537a274a473b699a580bb0dc43871f3ba4b7698c9f56e2f40eeb851cd8f" Dec 08 19:45:04 crc kubenswrapper[5125]: I1208 19:45:04.267102 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420385-rcxqj" Dec 08 19:45:07 crc kubenswrapper[5125]: I1208 19:45:07.111999 5125 ???:1] "http: TLS handshake error from 192.168.126.11:52270: no serving certificate available for the kubelet" Dec 08 19:45:12 crc kubenswrapper[5125]: I1208 19:45:12.329700 5125 generic.go:358] "Generic (PLEG): container finished" podID="f4834735-4658-450c-b286-08fd815ceb02" containerID="ab4c0ccc9cc321aecab918ac1c1c7d6364f127bf7e06bd4ff1196e7ae77c2646" exitCode=0 Dec 08 19:45:12 crc kubenswrapper[5125]: I1208 19:45:12.329801 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hrw9w" event={"ID":"f4834735-4658-450c-b286-08fd815ceb02","Type":"ContainerDied","Data":"ab4c0ccc9cc321aecab918ac1c1c7d6364f127bf7e06bd4ff1196e7ae77c2646"} Dec 08 19:45:12 crc kubenswrapper[5125]: I1208 19:45:12.330658 5125 scope.go:117] "RemoveContainer" containerID="ab4c0ccc9cc321aecab918ac1c1c7d6364f127bf7e06bd4ff1196e7ae77c2646" Dec 08 19:45:21 crc kubenswrapper[5125]: I1208 19:45:21.100970 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:45:21 crc kubenswrapper[5125]: I1208 19:45:21.101576 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:45:22 crc kubenswrapper[5125]: I1208 19:45:22.413730 5125 generic.go:358] "Generic (PLEG): container finished" podID="f4834735-4658-450c-b286-08fd815ceb02" 
containerID="b4d3e72012c704eff9c75c52a8e41716208b9d9037d2aecc435181b109a8213c" exitCode=0
Dec 08 19:45:22 crc kubenswrapper[5125]: I1208 19:45:22.413785 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hrw9w" event={"ID":"f4834735-4658-450c-b286-08fd815ceb02","Type":"ContainerDied","Data":"b4d3e72012c704eff9c75c52a8e41716208b9d9037d2aecc435181b109a8213c"}
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.718767 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hrw9w"
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.797723 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-publisher\") pod \"f4834735-4658-450c-b286-08fd815ceb02\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") "
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.797936 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-healthcheck-log\") pod \"f4834735-4658-450c-b286-08fd815ceb02\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") "
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.797982 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-entrypoint-script\") pod \"f4834735-4658-450c-b286-08fd815ceb02\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") "
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.798031 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-sensubility-config\") pod \"f4834735-4658-450c-b286-08fd815ceb02\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") "
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.798093 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-entrypoint-script\") pod \"f4834735-4658-450c-b286-08fd815ceb02\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") "
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.798130 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-config\") pod \"f4834735-4658-450c-b286-08fd815ceb02\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") "
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.798177 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bfp8\" (UniqueName: \"kubernetes.io/projected/f4834735-4658-450c-b286-08fd815ceb02-kube-api-access-6bfp8\") pod \"f4834735-4658-450c-b286-08fd815ceb02\" (UID: \"f4834735-4658-450c-b286-08fd815ceb02\") "
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.807750 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4834735-4658-450c-b286-08fd815ceb02-kube-api-access-6bfp8" (OuterVolumeSpecName: "kube-api-access-6bfp8") pod "f4834735-4658-450c-b286-08fd815ceb02" (UID: "f4834735-4658-450c-b286-08fd815ceb02"). InnerVolumeSpecName "kube-api-access-6bfp8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.815743 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "f4834735-4658-450c-b286-08fd815ceb02" (UID: "f4834735-4658-450c-b286-08fd815ceb02"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.816926 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "f4834735-4658-450c-b286-08fd815ceb02" (UID: "f4834735-4658-450c-b286-08fd815ceb02"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.818585 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "f4834735-4658-450c-b286-08fd815ceb02" (UID: "f4834735-4658-450c-b286-08fd815ceb02"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.819423 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "f4834735-4658-450c-b286-08fd815ceb02" (UID: "f4834735-4658-450c-b286-08fd815ceb02"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.820382 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "f4834735-4658-450c-b286-08fd815ceb02" (UID: "f4834735-4658-450c-b286-08fd815ceb02"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.832735 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "f4834735-4658-450c-b286-08fd815ceb02" (UID: "f4834735-4658-450c-b286-08fd815ceb02"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.900474 5125 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-healthcheck-log\") on node \"crc\" DevicePath \"\""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.900515 5125 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.900537 5125 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-sensubility-config\") on node \"crc\" DevicePath \"\""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.900551 5125 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.900562 5125 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-collectd-config\") on node \"crc\" DevicePath \"\""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.900573 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6bfp8\" (UniqueName: \"kubernetes.io/projected/f4834735-4658-450c-b286-08fd815ceb02-kube-api-access-6bfp8\") on node \"crc\" DevicePath \"\""
Dec 08 19:45:23 crc kubenswrapper[5125]: I1208 19:45:23.900584 5125 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/f4834735-4658-450c-b286-08fd815ceb02-ceilometer-publisher\") on node \"crc\" DevicePath \"\""
Dec 08 19:45:24 crc kubenswrapper[5125]: I1208 19:45:24.429181 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hrw9w" event={"ID":"f4834735-4658-450c-b286-08fd815ceb02","Type":"ContainerDied","Data":"1edd3a8bd0c739242c3a6b97c0915e542113934fa195e2877f239e596ed3fdb1"}
Dec 08 19:45:24 crc kubenswrapper[5125]: I1208 19:45:24.429650 5125 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1edd3a8bd0c739242c3a6b97c0915e542113934fa195e2877f239e596ed3fdb1"
Dec 08 19:45:24 crc kubenswrapper[5125]: I1208 19:45:24.429277 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hrw9w"
Dec 08 19:45:37 crc kubenswrapper[5125]: I1208 19:45:37.269126 5125 ???:1] "http: TLS handshake error from 192.168.126.11:32994: no serving certificate available for the kubelet"
Dec 08 19:45:51 crc kubenswrapper[5125]: I1208 19:45:51.101542 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:45:51 crc kubenswrapper[5125]: I1208 19:45:51.102238 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:45:51 crc kubenswrapper[5125]: I1208 19:45:51.102315 5125 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slhjr"
Dec 08 19:45:51 crc kubenswrapper[5125]: I1208 19:45:51.103290 5125 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"182a5753b7665f64b7e1bda17a1b8b8ee7e43a6725053ecf79f5513fca73d87e"} pod="openshift-machine-config-operator/machine-config-daemon-slhjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 19:45:51 crc kubenswrapper[5125]: I1208 19:45:51.103430 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" containerID="cri-o://182a5753b7665f64b7e1bda17a1b8b8ee7e43a6725053ecf79f5513fca73d87e" gracePeriod=600
Dec 08 19:45:51 crc kubenswrapper[5125]: I1208 19:45:51.666029 5125 generic.go:358] "Generic (PLEG): container finished" podID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerID="182a5753b7665f64b7e1bda17a1b8b8ee7e43a6725053ecf79f5513fca73d87e" exitCode=0
Dec 08 19:45:51 crc kubenswrapper[5125]: I1208 19:45:51.666121 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerDied","Data":"182a5753b7665f64b7e1bda17a1b8b8ee7e43a6725053ecf79f5513fca73d87e"}
Dec 08 19:45:51 crc kubenswrapper[5125]: I1208 19:45:51.666515 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerStarted","Data":"18ed85182f6c7e4a70aa4786e3545f088fdb29af31c15e5b8127848a09d46d96"}
Dec 08 19:45:51 crc kubenswrapper[5125]: I1208 19:45:51.666542 5125 scope.go:117] "RemoveContainer" containerID="f9eb1c7e5f36182d845fb8ea13653363a63738eedc2b7b6ae1600d40f21292c7"
Dec 08 19:45:52 crc kubenswrapper[5125]: E1208 19:45:52.749549 5125 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError"
Dec 08 19:45:54 crc kubenswrapper[5125]: I1208 19:45:54.783725 5125 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Dec 08 19:45:54 crc kubenswrapper[5125]: I1208 19:45:54.796619 5125 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 19:45:54 crc kubenswrapper[5125]: I1208 19:45:54.818658 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55232: no serving certificate available for the kubelet"
Dec 08 19:45:54 crc kubenswrapper[5125]: I1208 19:45:54.849353 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55236: no serving certificate available for the kubelet"
Dec 08 19:45:54 crc kubenswrapper[5125]: I1208 19:45:54.877543 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55252: no serving certificate available for the kubelet"
Dec 08 19:45:54 crc kubenswrapper[5125]: I1208 19:45:54.917466 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55254: no serving certificate available for the kubelet"
Dec 08 19:45:54 crc kubenswrapper[5125]: I1208 19:45:54.980666 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55260: no serving certificate available for the kubelet"
Dec 08 19:45:55 crc kubenswrapper[5125]: I1208 19:45:55.085190 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55262: no serving certificate available for the kubelet"
Dec 08 19:45:55 crc kubenswrapper[5125]: I1208 19:45:55.280788 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55276: no serving certificate available for the kubelet"
Dec 08 19:45:55 crc kubenswrapper[5125]: I1208 19:45:55.627913 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55292: no serving certificate available for the kubelet"
Dec 08 19:45:56 crc kubenswrapper[5125]: I1208 19:45:56.303514 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55294: no serving certificate available for the kubelet"
Dec 08 19:45:57 crc kubenswrapper[5125]: I1208 19:45:57.617776 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55296: no serving certificate available for the kubelet"
Dec 08 19:46:00 crc kubenswrapper[5125]: I1208 19:46:00.199211 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55308: no serving certificate available for the kubelet"
Dec 08 19:46:05 crc kubenswrapper[5125]: I1208 19:46:05.355324 5125 ???:1] "http: TLS handshake error from 192.168.126.11:40526: no serving certificate available for the kubelet"
Dec 08 19:46:07 crc kubenswrapper[5125]: I1208 19:46:07.431219 5125 ???:1] "http: TLS handshake error from 192.168.126.11:40534: no serving certificate available for the kubelet"
Dec 08 19:46:15 crc kubenswrapper[5125]: I1208 19:46:15.621241 5125 ???:1] "http: TLS handshake error from 192.168.126.11:53078: no serving certificate available for the kubelet"
Dec 08 19:46:36 crc kubenswrapper[5125]: I1208 19:46:36.131702 5125 ???:1] "http: TLS handshake error from 192.168.126.11:37338: no serving certificate available for the kubelet"
Dec 08 19:46:37 crc kubenswrapper[5125]: I1208 19:46:37.609808 5125 ???:1] "http: TLS handshake error from 192.168.126.11:37340: no serving certificate available for the kubelet"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.551792 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-vmt57"]
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.553135 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4834735-4658-450c-b286-08fd815ceb02" containerName="smoketest-collectd"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.553151 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4834735-4658-450c-b286-08fd815ceb02" containerName="smoketest-collectd"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.553171 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f4834735-4658-450c-b286-08fd815ceb02" containerName="smoketest-ceilometer"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.553178 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4834735-4658-450c-b286-08fd815ceb02" containerName="smoketest-ceilometer"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.553212 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a370250a-63fe-4722-8041-7c61ffc91ef7" containerName="collect-profiles"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.553220 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="a370250a-63fe-4722-8041-7c61ffc91ef7" containerName="collect-profiles"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.553346 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="a370250a-63fe-4722-8041-7c61ffc91ef7" containerName="collect-profiles"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.553358 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="f4834735-4658-450c-b286-08fd815ceb02" containerName="smoketest-ceilometer"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.553371 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="f4834735-4658-450c-b286-08fd815ceb02" containerName="smoketest-collectd"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.561005 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-vmt57"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.563336 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-vmt57"]
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.699295 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdk9q\" (UniqueName: \"kubernetes.io/projected/3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6-kube-api-access-gdk9q\") pod \"infrawatch-operators-vmt57\" (UID: \"3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6\") " pod="service-telemetry/infrawatch-operators-vmt57"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.801123 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gdk9q\" (UniqueName: \"kubernetes.io/projected/3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6-kube-api-access-gdk9q\") pod \"infrawatch-operators-vmt57\" (UID: \"3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6\") " pod="service-telemetry/infrawatch-operators-vmt57"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.829926 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdk9q\" (UniqueName: \"kubernetes.io/projected/3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6-kube-api-access-gdk9q\") pod \"infrawatch-operators-vmt57\" (UID: \"3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6\") " pod="service-telemetry/infrawatch-operators-vmt57"
Dec 08 19:46:49 crc kubenswrapper[5125]: I1208 19:46:49.879594 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-vmt57"
Dec 08 19:46:50 crc kubenswrapper[5125]: I1208 19:46:50.330091 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-vmt57"]
Dec 08 19:46:51 crc kubenswrapper[5125]: I1208 19:46:51.191732 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vmt57" event={"ID":"3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6","Type":"ContainerStarted","Data":"b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e"}
Dec 08 19:46:51 crc kubenswrapper[5125]: I1208 19:46:51.192153 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vmt57" event={"ID":"3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6","Type":"ContainerStarted","Data":"f95b1a863a705adb01e33976c97274192f6b44cf06f77b92b5ad6d2e3895e69c"}
Dec 08 19:46:51 crc kubenswrapper[5125]: I1208 19:46:51.218018 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-vmt57" podStartSLOduration=1.705957149 podStartE2EDuration="2.217993474s" podCreationTimestamp="2025-12-08 19:46:49 +0000 UTC" firstStartedPulling="2025-12-08 19:46:50.339283058 +0000 UTC m=+1067.109773332" lastFinishedPulling="2025-12-08 19:46:50.851319383 +0000 UTC m=+1067.621809657" observedRunningTime="2025-12-08 19:46:51.210068358 +0000 UTC m=+1067.980558652" watchObservedRunningTime="2025-12-08 19:46:51.217993474 +0000 UTC m=+1067.988483758"
Dec 08 19:46:59 crc kubenswrapper[5125]: I1208 19:46:59.880287 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-vmt57"
Dec 08 19:46:59 crc kubenswrapper[5125]: I1208 19:46:59.881982 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-vmt57"
Dec 08 19:46:59 crc kubenswrapper[5125]: I1208 19:46:59.910191 5125 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-vmt57"
Dec 08 19:47:00 crc kubenswrapper[5125]: I1208 19:47:00.302344 5125 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-vmt57"
Dec 08 19:47:03 crc kubenswrapper[5125]: I1208 19:47:03.335184 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-vmt57"]
Dec 08 19:47:03 crc kubenswrapper[5125]: I1208 19:47:03.335849 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-vmt57" podUID="3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6" containerName="registry-server" containerID="cri-o://b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e" gracePeriod=2
Dec 08 19:47:03 crc kubenswrapper[5125]: I1208 19:47:03.702648 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-vmt57"
Dec 08 19:47:03 crc kubenswrapper[5125]: I1208 19:47:03.724138 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdk9q\" (UniqueName: \"kubernetes.io/projected/3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6-kube-api-access-gdk9q\") pod \"3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6\" (UID: \"3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6\") "
Dec 08 19:47:03 crc kubenswrapper[5125]: I1208 19:47:03.730032 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6-kube-api-access-gdk9q" (OuterVolumeSpecName: "kube-api-access-gdk9q") pod "3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6" (UID: "3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6"). InnerVolumeSpecName "kube-api-access-gdk9q". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:47:03 crc kubenswrapper[5125]: I1208 19:47:03.825939 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gdk9q\" (UniqueName: \"kubernetes.io/projected/3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6-kube-api-access-gdk9q\") on node \"crc\" DevicePath \"\""
Dec 08 19:47:04 crc kubenswrapper[5125]: I1208 19:47:04.304172 5125 generic.go:358] "Generic (PLEG): container finished" podID="3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6" containerID="b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e" exitCode=0
Dec 08 19:47:04 crc kubenswrapper[5125]: I1208 19:47:04.304301 5125 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-vmt57"
Dec 08 19:47:04 crc kubenswrapper[5125]: I1208 19:47:04.304326 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vmt57" event={"ID":"3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6","Type":"ContainerDied","Data":"b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e"}
Dec 08 19:47:04 crc kubenswrapper[5125]: I1208 19:47:04.304391 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-vmt57" event={"ID":"3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6","Type":"ContainerDied","Data":"f95b1a863a705adb01e33976c97274192f6b44cf06f77b92b5ad6d2e3895e69c"}
Dec 08 19:47:04 crc kubenswrapper[5125]: I1208 19:47:04.304413 5125 scope.go:117] "RemoveContainer" containerID="b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e"
Dec 08 19:47:04 crc kubenswrapper[5125]: I1208 19:47:04.328538 5125 scope.go:117] "RemoveContainer" containerID="b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e"
Dec 08 19:47:04 crc kubenswrapper[5125]: E1208 19:47:04.328960 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e\": container with ID starting with b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e not found: ID does not exist" containerID="b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e"
Dec 08 19:47:04 crc kubenswrapper[5125]: I1208 19:47:04.329040 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e"} err="failed to get container status \"b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e\": rpc error: code = NotFound desc = could not find container \"b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e\": container with ID starting with b64b8e468760065f707e18b8f7344af445e021ec02d341596f563b00bfca089e not found: ID does not exist"
Dec 08 19:47:04 crc kubenswrapper[5125]: I1208 19:47:04.332647 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-vmt57"]
Dec 08 19:47:04 crc kubenswrapper[5125]: I1208 19:47:04.337692 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-vmt57"]
Dec 08 19:47:05 crc kubenswrapper[5125]: I1208 19:47:05.775100 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6" path="/var/lib/kubelet/pods/3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6/volumes"
Dec 08 19:47:08 crc kubenswrapper[5125]: I1208 19:47:08.910152 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49456: no serving certificate available for the kubelet"
Dec 08 19:47:09 crc kubenswrapper[5125]: I1208 19:47:09.204685 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49472: no serving certificate available for the kubelet"
Dec 08 19:47:09 crc kubenswrapper[5125]: I1208 19:47:09.458216 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49488: no serving certificate available for the kubelet"
Dec 08 19:47:09 crc kubenswrapper[5125]: I1208 19:47:09.726853 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49502: no serving certificate available for the kubelet"
Dec 08 19:47:10 crc kubenswrapper[5125]: I1208 19:47:10.053651 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49506: no serving certificate available for the kubelet"
Dec 08 19:47:10 crc kubenswrapper[5125]: I1208 19:47:10.342246 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49512: no serving certificate available for the kubelet"
Dec 08 19:47:10 crc kubenswrapper[5125]: I1208 19:47:10.643223 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49520: no serving certificate available for the kubelet"
Dec 08 19:47:10 crc kubenswrapper[5125]: I1208 19:47:10.979951 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49536: no serving certificate available for the kubelet"
Dec 08 19:47:11 crc kubenswrapper[5125]: I1208 19:47:11.285735 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49546: no serving certificate available for the kubelet"
Dec 08 19:47:11 crc kubenswrapper[5125]: I1208 19:47:11.612684 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49552: no serving certificate available for the kubelet"
Dec 08 19:47:11 crc kubenswrapper[5125]: I1208 19:47:11.925668 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49558: no serving certificate available for the kubelet"
Dec 08 19:47:12 crc kubenswrapper[5125]: I1208 19:47:12.205743 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49572: no serving certificate available for the kubelet"
Dec 08 19:47:12 crc kubenswrapper[5125]: I1208 19:47:12.491061 5125 ???:1] "http: TLS handshake error from 192.168.126.11:36258: no serving certificate available for the kubelet"
Dec 08 19:47:12 crc kubenswrapper[5125]: I1208 19:47:12.771938 5125 ???:1] "http: TLS handshake error from 192.168.126.11:36274: no serving certificate available for the kubelet"
Dec 08 19:47:13 crc kubenswrapper[5125]: I1208 19:47:13.093691 5125 ???:1] "http: TLS handshake error from 192.168.126.11:36282: no serving certificate available for the kubelet"
Dec 08 19:47:13 crc kubenswrapper[5125]: I1208 19:47:13.360402 5125 ???:1] "http: TLS handshake error from 192.168.126.11:36290: no serving certificate available for the kubelet"
Dec 08 19:47:13 crc kubenswrapper[5125]: I1208 19:47:13.593956 5125 ???:1] "http: TLS handshake error from 192.168.126.11:36300: no serving certificate available for the kubelet"
Dec 08 19:47:13 crc kubenswrapper[5125]: I1208 19:47:13.859848 5125 ???:1] "http: TLS handshake error from 192.168.126.11:36314: no serving certificate available for the kubelet"
Dec 08 19:47:17 crc kubenswrapper[5125]: I1208 19:47:17.123235 5125 ???:1] "http: TLS handshake error from 192.168.126.11:36326: no serving certificate available for the kubelet"
Dec 08 19:47:27 crc kubenswrapper[5125]: I1208 19:47:27.786414 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39622: no serving certificate available for the kubelet"
Dec 08 19:47:28 crc kubenswrapper[5125]: I1208 19:47:28.105737 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39624: no serving certificate available for the kubelet"
Dec 08 19:47:28 crc kubenswrapper[5125]: I1208 19:47:28.355570 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39634: no serving certificate available for the kubelet"
Dec 08 19:47:51 crc kubenswrapper[5125]: I1208 19:47:51.101219 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:47:51 crc kubenswrapper[5125]: I1208 19:47:51.101858 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.292958 5125 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wr9n2/must-gather-25hll"]
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.294464 5125 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6" containerName="registry-server"
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.294496 5125 state_mem.go:107] "Deleted CPUSet assignment" podUID="3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6" containerName="registry-server"
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.294759 5125 memory_manager.go:356] "RemoveStaleState removing state" podUID="3350a1c9-a1ae-4f49-b9e0-b1d49cdddfa6" containerName="registry-server"
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.302302 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wr9n2/must-gather-25hll"
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.305777 5125 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-wr9n2\"/\"default-dockercfg-n4hsm\""
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.306012 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wr9n2\"/\"kube-root-ca.crt\""
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.310217 5125 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-wr9n2\"/\"openshift-service-ca.crt\""
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.310917 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wr9n2/must-gather-25hll"]
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.414552 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9glw\" (UniqueName: \"kubernetes.io/projected/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-kube-api-access-p9glw\") pod \"must-gather-25hll\" (UID: \"8e82d75b-4e79-429c-97e3-8d2cedeadbe7\") " pod="openshift-must-gather-wr9n2/must-gather-25hll"
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.414661 5125 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-must-gather-output\") pod \"must-gather-25hll\" (UID: \"8e82d75b-4e79-429c-97e3-8d2cedeadbe7\") " pod="openshift-must-gather-wr9n2/must-gather-25hll"
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.516339 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p9glw\" (UniqueName: \"kubernetes.io/projected/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-kube-api-access-p9glw\") pod \"must-gather-25hll\" (UID: \"8e82d75b-4e79-429c-97e3-8d2cedeadbe7\") " pod="openshift-must-gather-wr9n2/must-gather-25hll"
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.516416 5125 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-must-gather-output\") pod \"must-gather-25hll\" (UID: \"8e82d75b-4e79-429c-97e3-8d2cedeadbe7\") " pod="openshift-must-gather-wr9n2/must-gather-25hll"
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.517016 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-must-gather-output\") pod \"must-gather-25hll\" (UID: \"8e82d75b-4e79-429c-97e3-8d2cedeadbe7\") " pod="openshift-must-gather-wr9n2/must-gather-25hll"
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.535828 5125 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9glw\" (UniqueName: \"kubernetes.io/projected/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-kube-api-access-p9glw\") pod \"must-gather-25hll\" (UID: \"8e82d75b-4e79-429c-97e3-8d2cedeadbe7\") " pod="openshift-must-gather-wr9n2/must-gather-25hll"
Dec 08 19:47:53 crc kubenswrapper[5125]: I1208 19:47:53.632864 5125 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wr9n2/must-gather-25hll"
Dec 08 19:47:54 crc kubenswrapper[5125]: I1208 19:47:54.072310 5125 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wr9n2/must-gather-25hll"]
Dec 08 19:47:54 crc kubenswrapper[5125]: I1208 19:47:54.748358 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wr9n2/must-gather-25hll" event={"ID":"8e82d75b-4e79-429c-97e3-8d2cedeadbe7","Type":"ContainerStarted","Data":"3dc97c5ab9b5c31e35ca4b0cac59d1216160d199d10deae85f92129a23369a47"}
Dec 08 19:47:59 crc kubenswrapper[5125]: I1208 19:47:59.787991 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wr9n2/must-gather-25hll" event={"ID":"8e82d75b-4e79-429c-97e3-8d2cedeadbe7","Type":"ContainerStarted","Data":"6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247"}
Dec 08 19:47:59 crc kubenswrapper[5125]: I1208 19:47:59.788579 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wr9n2/must-gather-25hll" event={"ID":"8e82d75b-4e79-429c-97e3-8d2cedeadbe7","Type":"ContainerStarted","Data":"2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65"}
Dec 08 19:47:59 crc kubenswrapper[5125]: I1208 19:47:59.803450 5125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wr9n2/must-gather-25hll" podStartSLOduration=2.14073993 podStartE2EDuration="6.803428617s" podCreationTimestamp="2025-12-08 19:47:53 +0000 UTC" firstStartedPulling="2025-12-08 19:47:54.071452538 +0000 UTC m=+1130.841942832" lastFinishedPulling="2025-12-08 19:47:58.734141245 +0000 UTC m=+1135.504631519" observedRunningTime="2025-12-08 19:47:59.800319353 +0000 UTC m=+1136.570809647" watchObservedRunningTime="2025-12-08 19:47:59.803428617 +0000 UTC m=+1136.573918901"
Dec 08 19:48:03 crc kubenswrapper[5125]: I1208 19:48:03.289021 5125 ???:1] "http: TLS handshake error from 192.168.126.11:37118: no serving certificate available for the kubelet"
Dec 08 19:48:11 crc kubenswrapper[5125]: I1208 19:48:11.312309 5125 scope.go:117] "RemoveContainer" containerID="9a3e300929e19dac671dcbdd8dcbcb9b092cda296ed12cbc4db622b83d1f0c5c"
Dec 08 19:48:11 crc kubenswrapper[5125]: I1208 19:48:11.344904 5125 scope.go:117] "RemoveContainer" containerID="9f1395926e3c5daa06fd8c8b2a1de8dbd5ea3b77be2b36bed9f099243665b37e"
Dec 08 19:48:11 crc kubenswrapper[5125]: I1208 19:48:11.371744 5125 scope.go:117] "RemoveContainer" containerID="e58530b9bf5f8828dcc1dc4ab8e79fbd1f76461d0990acb8d36418297e9a293f"
Dec 08 19:48:21 crc kubenswrapper[5125]: I1208 19:48:21.100984 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:48:21 crc kubenswrapper[5125]: I1208 19:48:21.101579 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:48:37 crc kubenswrapper[5125]: I1208 19:48:37.701400 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38346: no serving certificate available for the kubelet"
Dec 08 19:48:37 crc kubenswrapper[5125]: I1208 19:48:37.863815 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38360: no serving certificate available for the kubelet"
Dec 08 19:48:37 crc kubenswrapper[5125]: I1208 19:48:37.868906 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38366: no serving certificate available for the kubelet"
Dec 08 19:48:39 crc kubenswrapper[5125]: I1208 19:48:39.074086 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38382: no serving certificate
available for the kubelet" Dec 08 19:48:48 crc kubenswrapper[5125]: I1208 19:48:48.714940 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55656: no serving certificate available for the kubelet" Dec 08 19:48:48 crc kubenswrapper[5125]: I1208 19:48:48.894913 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55658: no serving certificate available for the kubelet" Dec 08 19:48:48 crc kubenswrapper[5125]: I1208 19:48:48.908465 5125 ???:1] "http: TLS handshake error from 192.168.126.11:55674: no serving certificate available for the kubelet" Dec 08 19:48:51 crc kubenswrapper[5125]: I1208 19:48:51.101335 5125 patch_prober.go:28] interesting pod/machine-config-daemon-slhjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:48:51 crc kubenswrapper[5125]: I1208 19:48:51.101873 5125 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:48:51 crc kubenswrapper[5125]: I1208 19:48:51.101943 5125 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" Dec 08 19:48:51 crc kubenswrapper[5125]: I1208 19:48:51.103052 5125 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"18ed85182f6c7e4a70aa4786e3545f088fdb29af31c15e5b8127848a09d46d96"} pod="openshift-machine-config-operator/machine-config-daemon-slhjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:48:51 crc kubenswrapper[5125]: I1208 
19:48:51.103163 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" podUID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerName="machine-config-daemon" containerID="cri-o://18ed85182f6c7e4a70aa4786e3545f088fdb29af31c15e5b8127848a09d46d96" gracePeriod=600 Dec 08 19:48:52 crc kubenswrapper[5125]: I1208 19:48:52.172643 5125 generic.go:358] "Generic (PLEG): container finished" podID="d8cea827-b8e3-4d92-adea-df0afd2397da" containerID="18ed85182f6c7e4a70aa4786e3545f088fdb29af31c15e5b8127848a09d46d96" exitCode=0 Dec 08 19:48:52 crc kubenswrapper[5125]: I1208 19:48:52.172643 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerDied","Data":"18ed85182f6c7e4a70aa4786e3545f088fdb29af31c15e5b8127848a09d46d96"} Dec 08 19:48:52 crc kubenswrapper[5125]: I1208 19:48:52.173014 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slhjr" event={"ID":"d8cea827-b8e3-4d92-adea-df0afd2397da","Type":"ContainerStarted","Data":"fe5a69493c603924acfd75acf6f1568758c5e2f3c958ba6fff805c5674c9bdbb"} Dec 08 19:48:52 crc kubenswrapper[5125]: I1208 19:48:52.173031 5125 scope.go:117] "RemoveContainer" containerID="182a5753b7665f64b7e1bda17a1b8b8ee7e43a6725053ecf79f5513fca73d87e" Dec 08 19:49:03 crc kubenswrapper[5125]: I1208 19:49:03.460293 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38544: no serving certificate available for the kubelet" Dec 08 19:49:03 crc kubenswrapper[5125]: I1208 19:49:03.658011 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38554: no serving certificate available for the kubelet" Dec 08 19:49:03 crc kubenswrapper[5125]: I1208 19:49:03.671662 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38570: no serving certificate available for the kubelet" Dec 08 
19:49:03 crc kubenswrapper[5125]: I1208 19:49:03.679037 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38580: no serving certificate available for the kubelet" Dec 08 19:49:03 crc kubenswrapper[5125]: I1208 19:49:03.832803 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38582: no serving certificate available for the kubelet" Dec 08 19:49:03 crc kubenswrapper[5125]: I1208 19:49:03.851885 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38592: no serving certificate available for the kubelet" Dec 08 19:49:03 crc kubenswrapper[5125]: I1208 19:49:03.861280 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38606: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.010160 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38620: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.138873 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38628: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.171494 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38634: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.201721 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p7g8_b938d768-ccce-45a6-a982-3f5d6f1a7d98/kube-multus/0.log" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.209331 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p7g8_b938d768-ccce-45a6-a982-3f5d6f1a7d98/kube-multus/0.log" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.219693 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.222021 5125 ???:1] 
"http: TLS handshake error from 192.168.126.11:38646: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.222165 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.364467 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38654: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.372201 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38664: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.395177 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38666: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.546144 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38672: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.685994 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38682: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.728223 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38688: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.728645 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38698: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.848378 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38712: no serving certificate available for the kubelet" Dec 08 19:49:04 crc kubenswrapper[5125]: I1208 19:49:04.870178 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38728: no serving certificate available for the kubelet" Dec 08 19:49:04 crc 
kubenswrapper[5125]: I1208 19:49:04.916949 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38740: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.019306 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38744: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.194031 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38748: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.218980 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38762: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.219546 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38766: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.414383 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38774: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.419370 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38784: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.446186 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38798: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.573885 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38802: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.730332 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38812: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.772536 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38830: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 
19:49:05.773816 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38814: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.931447 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38840: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.944314 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38848: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.971886 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38852: no serving certificate available for the kubelet" Dec 08 19:49:05 crc kubenswrapper[5125]: I1208 19:49:05.997793 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38862: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.165317 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38864: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.178480 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38878: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.182478 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38892: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.353029 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38900: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.366463 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38914: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.368499 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38920: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.414513 5125 ???:1] 
"http: TLS handshake error from 192.168.126.11:38930: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.550722 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38938: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.722283 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38942: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.726177 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38958: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.726734 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38968: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.886261 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38978: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.899978 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38986: no serving certificate available for the kubelet" Dec 08 19:49:06 crc kubenswrapper[5125]: I1208 19:49:06.917726 5125 ???:1] "http: TLS handshake error from 192.168.126.11:38992: no serving certificate available for the kubelet" Dec 08 19:49:11 crc kubenswrapper[5125]: I1208 19:49:11.419872 5125 scope.go:117] "RemoveContainer" containerID="97a8e569439335a9b5882d0098e87e5b4b9cc8bd4da7311912b761c027fa5bd3" Dec 08 19:49:17 crc kubenswrapper[5125]: I1208 19:49:17.901784 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39082: no serving certificate available for the kubelet" Dec 08 19:49:18 crc kubenswrapper[5125]: I1208 19:49:18.034000 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39094: no serving certificate available for the kubelet" Dec 08 19:49:18 crc kubenswrapper[5125]: I1208 19:49:18.063853 5125 ???:1] "http: TLS handshake error 
from 192.168.126.11:39108: no serving certificate available for the kubelet" Dec 08 19:49:18 crc kubenswrapper[5125]: I1208 19:49:18.221118 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39118: no serving certificate available for the kubelet" Dec 08 19:49:18 crc kubenswrapper[5125]: I1208 19:49:18.243246 5125 ???:1] "http: TLS handshake error from 192.168.126.11:39134: no serving certificate available for the kubelet" Dec 08 19:49:52 crc kubenswrapper[5125]: I1208 19:49:52.634554 5125 generic.go:358] "Generic (PLEG): container finished" podID="8e82d75b-4e79-429c-97e3-8d2cedeadbe7" containerID="2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65" exitCode=0 Dec 08 19:49:52 crc kubenswrapper[5125]: I1208 19:49:52.634653 5125 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wr9n2/must-gather-25hll" event={"ID":"8e82d75b-4e79-429c-97e3-8d2cedeadbe7","Type":"ContainerDied","Data":"2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65"} Dec 08 19:49:52 crc kubenswrapper[5125]: I1208 19:49:52.635702 5125 scope.go:117] "RemoveContainer" containerID="2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.305537 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51676: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.486227 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51686: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.498191 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51700: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.522164 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51716: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.537026 5125 ???:1] "http: TLS 
handshake error from 192.168.126.11:51720: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.552349 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51726: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.564551 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51728: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.579342 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51734: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.589815 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51738: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.750658 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51752: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.762380 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51768: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.788325 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51778: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.799388 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51794: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.814307 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51804: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.825439 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51810: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.837348 5125 ???:1] "http: TLS handshake error from 
192.168.126.11:51826: no serving certificate available for the kubelet" Dec 08 19:49:53 crc kubenswrapper[5125]: I1208 19:49:53.846958 5125 ???:1] "http: TLS handshake error from 192.168.126.11:51838: no serving certificate available for the kubelet" Dec 08 19:49:58 crc kubenswrapper[5125]: I1208 19:49:58.887426 5125 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wr9n2/must-gather-25hll"] Dec 08 19:49:58 crc kubenswrapper[5125]: I1208 19:49:58.888387 5125 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-wr9n2/must-gather-25hll" podUID="8e82d75b-4e79-429c-97e3-8d2cedeadbe7" containerName="copy" containerID="cri-o://6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247" gracePeriod=2 Dec 08 19:49:58 crc kubenswrapper[5125]: I1208 19:49:58.890993 5125 status_manager.go:895] "Failed to get status for pod" podUID="8e82d75b-4e79-429c-97e3-8d2cedeadbe7" pod="openshift-must-gather-wr9n2/must-gather-25hll" err="pods \"must-gather-25hll\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wr9n2\": no relationship found between node 'crc' and this object" Dec 08 19:49:58 crc kubenswrapper[5125]: I1208 19:49:58.894514 5125 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wr9n2/must-gather-25hll"] Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.274967 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wr9n2_must-gather-25hll_8e82d75b-4e79-429c-97e3-8d2cedeadbe7/copy/0.log" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.275857 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wr9n2/must-gather-25hll" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.277934 5125 status_manager.go:895] "Failed to get status for pod" podUID="8e82d75b-4e79-429c-97e3-8d2cedeadbe7" pod="openshift-must-gather-wr9n2/must-gather-25hll" err="pods \"must-gather-25hll\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wr9n2\": no relationship found between node 'crc' and this object" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.315226 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-must-gather-output\") pod \"8e82d75b-4e79-429c-97e3-8d2cedeadbe7\" (UID: \"8e82d75b-4e79-429c-97e3-8d2cedeadbe7\") " Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.315298 5125 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9glw\" (UniqueName: \"kubernetes.io/projected/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-kube-api-access-p9glw\") pod \"8e82d75b-4e79-429c-97e3-8d2cedeadbe7\" (UID: \"8e82d75b-4e79-429c-97e3-8d2cedeadbe7\") " Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.324879 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-kube-api-access-p9glw" (OuterVolumeSpecName: "kube-api-access-p9glw") pod "8e82d75b-4e79-429c-97e3-8d2cedeadbe7" (UID: "8e82d75b-4e79-429c-97e3-8d2cedeadbe7"). InnerVolumeSpecName "kube-api-access-p9glw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.370298 5125 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8e82d75b-4e79-429c-97e3-8d2cedeadbe7" (UID: "8e82d75b-4e79-429c-97e3-8d2cedeadbe7"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.416994 5125 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p9glw\" (UniqueName: \"kubernetes.io/projected/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-kube-api-access-p9glw\") on node \"crc\" DevicePath \"\"" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.417034 5125 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8e82d75b-4e79-429c-97e3-8d2cedeadbe7-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.689709 5125 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wr9n2_must-gather-25hll_8e82d75b-4e79-429c-97e3-8d2cedeadbe7/copy/0.log" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.690352 5125 generic.go:358] "Generic (PLEG): container finished" podID="8e82d75b-4e79-429c-97e3-8d2cedeadbe7" containerID="6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247" exitCode=143 Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.690426 5125 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wr9n2/must-gather-25hll" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.690514 5125 scope.go:117] "RemoveContainer" containerID="6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.694135 5125 status_manager.go:895] "Failed to get status for pod" podUID="8e82d75b-4e79-429c-97e3-8d2cedeadbe7" pod="openshift-must-gather-wr9n2/must-gather-25hll" err="pods \"must-gather-25hll\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wr9n2\": no relationship found between node 'crc' and this object" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.706750 5125 status_manager.go:895] "Failed to get status for pod" podUID="8e82d75b-4e79-429c-97e3-8d2cedeadbe7" pod="openshift-must-gather-wr9n2/must-gather-25hll" err="pods \"must-gather-25hll\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-wr9n2\": no relationship found between node 'crc' and this object" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.709083 5125 scope.go:117] "RemoveContainer" containerID="2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.775250 5125 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e82d75b-4e79-429c-97e3-8d2cedeadbe7" path="/var/lib/kubelet/pods/8e82d75b-4e79-429c-97e3-8d2cedeadbe7/volumes" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.780713 5125 scope.go:117] "RemoveContainer" containerID="6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247" Dec 08 19:49:59 crc kubenswrapper[5125]: E1208 19:49:59.781080 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247\": container with ID starting with 6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247 not found: ID does not exist" containerID="6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.781115 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247"} err="failed to get container status \"6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247\": rpc error: code = NotFound desc = could not find container \"6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247\": container with ID starting with 6ebcb523b9bf3dcf9c470f51b51fd8f7fa734645c6ce99aa425eec10e1479247 not found: ID does not exist" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.781132 5125 scope.go:117] "RemoveContainer" containerID="2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65" Dec 08 19:49:59 crc kubenswrapper[5125]: E1208 19:49:59.781348 5125 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65\": container with ID starting with 2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65 not found: ID does not exist" containerID="2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65" Dec 08 19:49:59 crc kubenswrapper[5125]: I1208 19:49:59.781409 5125 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65"} err="failed to get container status \"2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65\": rpc error: code = NotFound desc = could not find container \"2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65\": container with ID 
starting with 2e3160c86390de92c6b669d3bdae03ff1240cf125c9ac57fa7361bc03a9abe65 not found: ID does not exist" Dec 08 19:50:08 crc kubenswrapper[5125]: I1208 19:50:08.307246 5125 ???:1] "http: TLS handshake error from 192.168.126.11:49670: no serving certificate available for the kubelet"